Re: "Good PR" mechanical check
On Wednesday, 13 January 2016 at 05:19:36 UTC, H. S. Teoh wrote: There are also some (smaller) examples in std.range, such as in transposed(), where nested arrays are formatted like matrices in order to make it clear what the function is trying to do. I'm almost certain dfmt (or any mechanical formatting tool) will completely ruin this. It will. There's a solution in the form of special comments:

// dfmt off
auto treeStructure = [
    node(0, 0),
    node(1, 0),
    node(2, 0),
    node(10, 2),
    node(3, 0)
];
// dfmt on

// dfmt off
stuff.map!(a => complicatedFunction(a, 100) * 20)
     .filter!(a => a < 2_000 && a % 3 == 0)
     .sum();
// dfmt on

These formatting exceptions are primarily semantically-motivated; as such I don't expect a mechanical tool to be able to figure it out automatically. (E.g., nested arrays in transposed() may need 2D formatting, but in byChunk() it makes more sense to format them linearly with line-wrapping.) Correct. I propose that a better approach is to automate dfmt (or whatever tool) to check PRs and highlight places where the formatting deviates from the mechanical interpretation of the rules, so that submitters can have their attention brought to potentially problematic points, and reviewers don't have to scrutinize every line, but only look at the places that may require some human judgment to decide on. This might work.
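The check-and-highlight idea is straightforward to prototype: run the formatter, diff its output against the submission, and report only the differing line numbers. A minimal Python sketch of the mechanism; the normalize function below is a stand-in for invoking a real tool like dfmt, not dfmt itself:

```python
import difflib

def normalize(source: str) -> str:
    # Stand-in for a real formatter such as dfmt: here it merely
    # strips trailing whitespace from every line.
    return "\n".join(line.rstrip() for line in source.splitlines())

def formatting_deviations(submitted: str, formatted: str) -> list:
    """Return 1-based line numbers where the submission deviates from
    the mechanically formatted version, so reviewers inspect only
    those spots instead of scrutinizing every line."""
    sm = difflib.SequenceMatcher(
        a=submitted.splitlines(), b=formatted.splitlines())
    flagged = []
    for tag, i1, i2, _, _ in sm.get_opcodes():
        if tag != "equal":
            flagged.extend(range(i1 + 1, max(i2, i1 + 1) + 1))
    return flagged

code = "int a;  \nint b;\nint c;   \n"
assert formatting_deviations(code, normalize(code)) == [1, 3]
```

A CI hook built this way would not block on the deviations, only annotate them, which matches the proposal: human judgment stays in the loop for the semantically-motivated exceptions.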
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On Wednesday, 13 January 2016 at 02:12:36 UTC, tsbockman wrote: On Wednesday, 13 January 2016 at 01:43:21 UTC, John Colvin wrote: I am all for keeping it simple here, but I still think there's a problem. https://issues.dlang.org/show_bug.cgi?id=15561 That's a good point. Interesting. I often use partially-ordered objects in my code, and therefore define opCmp to return float, making use of the NaN value. But then I also define opEquals to return false for (NaN == NaN), and my custom types work as intended. In fact, the existence of the special floating-point operators like !<> (and being able to overload them) was one of the main reasons for me to start using D. For me it's a major issue if those operators don't work correctly. I know they are deprecated, but I don't know why. As was pointed out, they are necessary if you want to implement something partially ordered. The fact that not everybody needs this is not a valid reason to deprecate it. I hated being told I should not define opCmp to return float instead of int, as was also propagated by the "learning D" book. If this is the common state of the art, I will drop D and start using my own fork the moment they are not supported anymore.
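For readers who haven't run into it: the reason float (and an opCmp that returns float.nan) yields a partial order is IEEE 754's "unordered" comparison result. The semantics are identical in any IEEE language; a quick Python illustration:

```python
nan = float("nan")

# NaN is unordered with everything, including itself: every ordering
# comparison involving it yields False.
assert not (nan < 1.0)
assert not (nan > 1.0)
assert not (nan <= 1.0)
assert not (nan >= 1.0)

# Equality with NaN is also False, and inequality is True, which is
# what an opEquals returning false for (NaN == NaN) reproduces.
assert not (nan == nan)
assert nan != nan

# So <= is not reflexive over floats: they form a partial order, and
# a strictly total-order opCmp cannot model them faithfully.
```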
Re: extern(C++, ns)
On 1/12/2016 8:46 PM, Manu via Digitalmars-d wrote: Of course that's an error, declaring 2 symbols with the same name at the top level of the same module is obviously an error. No D coder would expect otherwise. There's no realistic scenario that could lead to that case; why would you have an extern(C++) symbol in some module and also want to put a D symbol with the same name in the same place? If you like: extern (C++) { int a; extern (C++,ns) { int a; } } The whole point of scoped names is to be able to do this. Also, I would expect to be able to access "ns.a" with the syntax "ns.a", meaning ns has to be a scope. > Of course, it would save a lot of effort if you agreed that the design is wrong, and none of us need to do anything further. I regard it as crucial to determine the cause of your problems before assuming it is the design.
Re: [dlang.org] new forum design - preview
On Wednesday, 13 January 2016 at 06:01:41 UTC, Vladimir Panteleev wrote: http://beta.forum.dlang.org/ https://github.com/CyberShadow/DFeed/pull/51 I dislike it :( the old one is better. You probably need to make the content fill up to 100% of the window size and make the forum part bigger. Also, it looks like the CSS is missing. vibed.org had the same issue after switching to the light template.
Re: [dlang.org] new forum design - preview
On Wednesday, 13 January 2016 at 06:01:41 UTC, Vladimir Panteleev wrote: http://beta.forum.dlang.org/ https://github.com/CyberShadow/DFeed/pull/51 Cool, looks nice!
[dlang.org] new forum design - preview
http://beta.forum.dlang.org/ https://github.com/CyberShadow/DFeed/pull/51
Re: "Good PR" mechanical check
On Tue, Jan 12, 2016 at 02:03:57PM -0500, Andrei Alexandrescu via Digitalmars-d wrote: > On 01/12/2016 08:42 AM, Martin Drašar via Digitalmars-d wrote: > >Wouldn't it be sufficient to mandate usage of dfmt with proper > >settings before submitting a PR? > > That would suffice at least in the beginning. We also need to put dfmt > in tools, again a project that's been in limbo for a long time. -- > Andrei In principle, I agree with mechanical checking of formatting (instead of the endless tedious nitpicking over the fine points of Phobos style), but, as Jonathan has brought up before, there are cases where human judgment is required and a tool would probably make a big mess of things. For example, certain unittests in std.datetime where customized formatting of array literals are used to make it easier to read the test cases, such as `testGregDaysBC` in std/datetime.d, as well as the several array literals inside the same unittest block. This is just one of many examples one can find in std.datetime. There are also some (smaller) examples in std.range, such as in transposed(), where nested arrays are formatted like matrices in order to make it clear what the function is trying to do. I'm almost certain dfmt (or any mechanical formatting tool) will completely ruin this. These formatting exceptions are primarily semantically-motivated; as such I don't expect a mechanical tool to be able to figure it out automatically. (E.g., nested arrays in transposed() may need 2D formatting, but in byChunk() it makes more sense to format them linearly with line-wrapping.) I propose that a better approach is to automate dfmt (or whatever tool) to check PRs and highlight places where the formatting deviates from the mechanical interpretation of the rules, so that submitters can have their attention brought to potentially problematic points, and reviewers don't have to scrutinize every line, but only look at the places that may require some human judgment to decide on. 
T -- "640K ought to be enough" -- Bill G. (allegedly), 1984. "The Internet is not a primary goal for PC usage" -- Bill G., 1995. "Linux has no impact on Microsoft's strategy" -- Bill G., 1999.
Re: extern(C++, ns)
New construct to solve the problem! extern(C++, nsin, nsout) The nsin is the C++ namespace to import from and nsout is the D namespace that the symbol ends up being in. You can default nsout to be local, global, or whatever one wants.
Re: extern(C++, ns)
On 13 January 2016 at 03:20, Walter Bright via Digitalmars-d wrote: > On 1/11/2016 8:02 PM, Manu via Digitalmars-d wrote: >> Surely the fact that people are implementing machinery to undo the damage done is a strong indication that they don't want the feature. Please, can anyone produce an argument in favour...? Otherwise just accept that it was a bad idea and eject it into space. Why could anyone be attached to it? > I already did for your scheme: int a; extern (C++,ns) { int a; } // error! Of course that's an error, declaring 2 symbols with the same name at the top level of the same module is obviously an error. No D coder would expect otherwise. There's no realistic scenario that could lead to that case; why would you have an extern(C++) symbol in some module and also want to put a D symbol with the same name in the same place? The only case I can imagine is you might want an extern(D) overload which wraps an extern(C++) function, but that works fine so there's no issue there. > The whole point of namespaces in C++ is to introduce scoped names. But we already have scoped names in D, so implementing that semantic for that reason has no value in D. We just want to link to C++ code. We don't want C++ concepts invading D, especially when the problem is already solved. We have a well defined module system. There's no reason, and no desire, to deviate from that in any way whatsoever. I suggest that, if you're convinced namespaces must exist to address the non-issue you present above, then it should be a separate opt-in feature. As far as I know, we're yet to hear from someone who wants that behaviour by default. > Not putting them in a scope might work for your project, but in general it will not, and the workarounds you suggested for it (putting them in separate modules) are awkward. This seems to be the foundation of our disagreement. I can't understand how you can make this claim; please demonstrate how using modules is awkward?
It's not awkward to put your code in a module, it's normal... in fact, it's impossible to put D code anywhere other than in a module. All D code is in a module... so, how can that possibly be considered awkward or inconvenient? Organising modules *IS* coding in D, and a user of extern(C++) wouldn't expect that they are suddenly following a different set of patterns or rules. They will continue to organise their code in modules however they usually do, in whatever way best suits their project or API. > All (*) the specific examples you've posted about fundamental problems with the current scheme have been adequately addressed (bugs were fixed, and misunderstandings clarified). Nobody in this thread has been able to determine, with an example, what all the other problems you talk about are. Sure, bugs have been fixed, and I'm certain we can further make this 'work' as the design intends, but that has come at great cost in time and energy so far; a cost which never needed to be spent, and further effort is required. But the point is, making this work as designed doesn't necessarily put us in a state we're happy with. Rainer presented a case for confusion; you're making an argument that the scope is to make it feel kinda like C++ code, but it doesn't really behave like C++ at all. You can re-open namespaces and add symbols into existing namespaces until the cows come home in C++, but in D you end up with multiple definitions of the same named namespaces in different modules, and then you need to use the full name (including the module) to disambiguate... which surely makes plain the reality of the situation, which is that the namespace **is the module**, not the C++ namespace. Requiring the full module name to disambiguate leaves the namespace redundant. I have never, not ever, seen a C++ program where the same namespace isn't used in basically every source file.
After mapping those source files to D modules, you will have the same namespace defined in every module, at which point the namespace only acts against its stated worth. Further, the namespace almost certainly collides with D's top level package name. Other problems like Daniel presented regarding reflection are just annoying work which doesn't need to happen. Reflection of the kind that scans scopes for symbols requires new logic which recurses into C++ namespaces, and no existing reflection code will have support for that. (Do we actually have __traits which can do that currently? Can we detect if a symbol is a C++ namespace?) One workaround for that is to alias the symbols from the namespace into the outer scope, which is just a bunch of pointless work and noise in the code, but we then want to make the namespace itself private such that the aliases become the API. Since alias is unable to make private symbols visible by aliasing them into a public scope, that leads to effort to create awkward kludges like Daniel presented to hide the namespace, or the approach I used whi
Re: Today was a good day
On 01/12/2016 08:31 PM, Timon Gehr wrote: - This probably does not make a large difference, but one can find the median of five elements using only at most 6 comparisons. (And without moving the elements.) [1] Moving the elements actually does help. - getPivot selects indices which depend deterministically on the range length. Therefore afaics it is now possible to create inputs that force quadratic runtime for topN. (E.g. an input can be crafted such that getPivot always picks the third-largest element in the range.) This violates the running time bound given in the documentation. I tried a static rng but found out that pure functions call sort(). Overall I'm not that worried about attacks on sort(). Andrei
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On Wednesday, 13 January 2016 at 01:43:21 UTC, John Colvin wrote: I am all for keeping it simple here, but I still think there's a problem. https://issues.dlang.org/show_bug.cgi?id=15561 That's a good point.
Re: Add "setBinaryMode" and "setTextMode" to stdio?
On Wednesday, 13 January 2016 at 00:13:13 UTC, Johan Engelen wrote: Should I work on a PR for setBinaryMode+setTextMode ? std.stdio is intended as a wrapper around stdio.h, which I don't think supports setting the mode post-fopen, but if we can support those operations for all practical targets then I think they would be nice additions nevertheless.
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On Wednesday, 13 January 2016 at 01:39:26 UTC, John Colvin wrote: On Wednesday, 13 January 2016 at 00:31:48 UTC, Andrei Alexandrescu wrote: [...] I would completely agree, except that we have builtin types that don't obey this rule. I'd be all in favour of sticking with total orders, but it does make it hard (impossible?) to make a proper drop-in replacement for the builtin floating point numbers (including wrappers, e.g. std.typecons.Typedef can't handle nans correctly) or to properly handle comparisons between custom types and builtin floating points (as mentioned by tsbockman). I am all for keeping it simple here, but I still think there's a problem. https://issues.dlang.org/show_bug.cgi?id=15561
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On Wednesday, 13 January 2016 at 00:31:48 UTC, Andrei Alexandrescu wrote: On 01/12/2016 06:52 PM, John Colvin wrote: On Tuesday, 12 January 2016 at 22:28:13 UTC, Andrei Alexandrescu wrote: On 01/12/2016 03:56 PM, John Colvin wrote: Please consider the second design I proposed? I don't think it solves a large problem. -- Andrei Ok. Would you consider any solution, or is that a "leave it broken"? I'd leave it to a named function. Using the built-in comparison for exotic orderings is bound to confuse users. BTW not sure you know, but D used to have a number of floating point operators like !<>=. Even those didn't help. -- Andrei I would completely agree, except that we have builtin types that don't obey this rule. I'd be all in favour of sticking with total orders, but it does make it hard (impossible?) to make a proper drop-in replacement for the builtin floating point numbers (including wrappers, e.g. std.typecons.Typedef can't handle nans correctly) or to properly handle comparisons between custom types and builtin floating points (as mentioned by tsbockman). I am all for keeping it simple here, but I still think there's a problem.
Re: Today was a good day
On 01/12/2016 04:15 AM, Andrei Alexandrescu wrote: A few primitive algorithms got quite a bit quicker. https://github.com/D-Programming-Language/phobos/pull/3921 https://github.com/D-Programming-Language/phobos/pull/3922 Destroy! Andrei Nice. - This probably does not make a large difference, but one can find the median of five elements using only at most 6 comparisons. (And without moving the elements.) [1] - getPivot selects indices which depend deterministically on the range length. Therefore afaics it is now possible to create inputs that force quadratic runtime for topN. (E.g. an input can be crafted such that getPivot always picks the third-largest element in the range.) This violates the running time bound given in the documentation. [1]: int median(int[5] a){ return a[a[0] }
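Timon's footnote code didn't survive the list software. For reference, here is one possible six-comparison median-of-five, a sketch in Python that is not necessarily the scheme he referred to (and unlike his version it moves values rather than leaving them in place):

```python
from itertools import permutations

def median5(a, b, c, d, e):
    # Exactly 6 comparisons on every path.
    if b < a:                          # 1
        a, b = b, a
    if d < c:                          # 2
        c, d = d, c
    # min(a, c) is now the minimum of {a, b, c, d}, so it is at most
    # the 2nd-smallest of all five values and cannot be the median.
    # Discard it, bring e into the freed slot, and re-sort that pair.
    if a < c:                          # 3
        a = e
        if b < a:                      # 4
            a, b = b, a
    else:
        c = e
        if d < c:                      # 4
            c, d = d, c
    # The median of the original five is the 2nd-smallest of the two
    # sorted pairs a <= b and c <= d.
    if a < c:                          # 5
        return b if b < c else c       # 6
    return a if a < d else d           # 6

# Sanity check over every ordering of five distinct values.
assert all(median5(*p) == 3 for p in permutations((1, 2, 3, 4, 5)))
```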
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On Wednesday, 13 January 2016 at 00:31:48 UTC, Andrei Alexandrescu wrote: I'd leave it to a named function. Using the built-in comparison for exotic orderings is bound to confuse users. BTW not sure you know, but D used to have a number of floating point operators like !<>=. Even those didn't help. -- Andrei Although I would have use for "exotic orderings" in some of my own code, I think this is the right decision. Really the only reason I'm tempted to say they should be allowed, is to smooth interaction with floating-point. But, I think what that really means is that the design of the floating-point comparisons is bad. (Which is not D's fault, I know.)
Re: IPFS is growing and Go, Swift, ruby, python, rust, C++, etc are already there
On Tuesday, 12 January 2016 at 23:26:07 UTC, karabuta wrote: It will probably take over HTTP. Not in your life time. This sounds like a glorified mesh network.
Re: IPFS is growing and Go, Swift, ruby, python, rust, C++, etc are already there
On Wednesday, 13 January 2016 at 00:00:17 UTC, israel wrote: On Tuesday, 12 January 2016 at 23:26:07 UTC, karabuta wrote: Anyone has the fuel and time to take the initiative? It will probably take over HTTP. Currently implemented in Go with JavaScript and Python on the way. However it seems most other programming languages have API Client Libraries. Sup? :) http://ipfs.io/ https://github.com/ipfs/ipfs So is this Blu-ray vs HD DVD all over again? https://zeronet.io/ Similar but different use case. https://www.reddit.com/r/zeronet/comments/3rhmt5/what_are_the_main_differnece_between_ipfs_and/
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On 01/12/2016 06:52 PM, John Colvin wrote: On Tuesday, 12 January 2016 at 22:28:13 UTC, Andrei Alexandrescu wrote: On 01/12/2016 03:56 PM, John Colvin wrote: Please consider the second design I proposed? I don't think it solves a large problem. -- Andrei Ok. Would you consider any solution, or is that a "leave it broken"? I'd leave it to a named function. Using the built-in comparison for exotic orderings is bound to confuse users. BTW not sure you know, but D used to have a number of floating point operators like !<>=. Even those didn't help. -- Andrei
Add "setBinaryMode" and "setTextMode" to stdio?
Hi all, To fix EOL writing with "dfmt ---> stdout" on Windows, stdout has to be set to binary mode [1]. The code for this is non-trivial, and some DMD internal magic is needed:

version(Windows)
{
    // See Phobos' stdio.File.rawWrite
    {
        import std.stdio;
        immutable fd = fileno(stdout.getFP());
        setmode(fd, _O_BINARY);
        version(CRuntime_DigitalMars)
        {
            import core.atomic : atomicOp;
            atomicOp!"&="(__fhnd_info[fd], ~FHND_TEXT);
        }
    }
}

I think it'd be very nice to have stdio.File.setBinaryMode() and stdio.File.setTextMode(). In dfmt's case, rawWrite is not available because stdout.lockingTextWriter() is used, which only has put(). Should I work on a PR for setBinaryMode+setTextMode ? thanks, Johan
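For anyone unfamiliar with why text versus binary mode matters here: the C runtime on Windows translates each '\n' to '\r\n' on output in text mode, and binary mode disables that. Python exposes the same translation through the newline argument of open(), which makes the effect visible on any platform (the file name below is arbitrary):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "eol.txt")

# "Text mode" with Windows-style translation: every '\n' written
# comes out as '\r\n' on disk.
with open(path, "w", newline="\r\n") as f:
    f.write("one\ntwo\n")

# "Binary mode": read the raw bytes back with no translation applied.
with open(path, "rb") as f:
    raw = f.read()

assert raw == b"one\r\ntwo\r\n"
```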
Re: IPFS is growing and Go, Swift, ruby, python, rust, C++, etc are already there
On Tuesday, 12 January 2016 at 23:26:07 UTC, karabuta wrote: Anyone has the fuel and time to take the initiative? It will probably take over HTTP. Currently implemented in Go with JavaScript and Python on the way. However it seems most other programming languages have API Client Libraries. Sup? :) http://ipfs.io/ https://github.com/ipfs/ipfs So is this Blu-ray vs HD DVD all over again? https://zeronet.io/
Re: Anyone using DMD to build 32bit on OS X?
On 1/12/2016 2:20 PM, bitwise wrote: On Tuesday, 12 January 2016 at 21:28:43 UTC, Walter Bright wrote: On 1/12/2016 12:36 PM, Andrei Alexandrescu wrote: On 01/12/2016 03:30 PM, Jacob Carlborg wrote: On 2016-01-12 17:48, Walter Bright wrote: From reading the responses here, I believe the best solution is to continue to support OSX 32 bit, but as legacy support. This means folding in changes to the 64 bit support, but not adding features if they are not a natural consequence of the 64 bit support. I.e. leave the TLS support for 32 bits as is. I came to the same conclusions. I'll update the PR. Thanks much for this work! -- Andrei I agree! Going to native TLS is the right way forward for 64 bits. Would having shared libraries for 64bit only be ok too? If they're not already there for 32 bit, then yes. I would like to avoid having to update the emulated TLS if possible. Thanks, Bit
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On Tuesday, 12 January 2016 at 22:28:13 UTC, Andrei Alexandrescu wrote: On 01/12/2016 03:56 PM, John Colvin wrote: Please consider the second design I proposed? I don't think it solves a large problem. -- Andrei Ok. Would you consider any solution, or is that a "leave it broken"? I think I can find a way around the problem for my purposes in the short term. However, for other people implementing custom types I think it is important, it's a dirty corner that needs sorting out. The more you get to know D, the more of them you find, the more frustrating it gets seeing they aren't likely to get fixed...
Re: [dlang.org] getting the redesign wrapped up
On 01/12/2016 03:12 PM, anonymous wrote: On 12.01.2016 08:24, Vladimir Panteleev wrote: > Nice. Is it responsive? As responsive as the main site. I just updated the dlang.org submodule and fixed what got broken. I'm mostly done now. Pull request is over here: https://github.com/CyberShadow/DFeed/pull/51 Here's the download source: https://github.com/braddr/downloads.dlang.org -- Andrei
IPFS is growing and Go, Swift, ruby, python, rust, C++, etc are already there
Does anyone have the fuel and time to take the initiative? It will probably take over HTTP. Currently implemented in Go, with JavaScript and Python on the way. However, it seems most other programming languages have API client libraries. Sup? :) http://ipfs.io/ https://github.com/ipfs/ipfs
Re: D for TensorFlow-like library
On Sunday, 8 November 2015 at 17:47:33 UTC, Muktabh wrote: We cannot make D bindings to it because it is a closed source project by Google and only a spec like mapreduce will be released, so I thought maybe I might try and come up with an open source implementation. The github repository looks pretty open-source to me: https://github.com/tensorflow/tensorflow
Re: D for TensorFlow-like library
I could perhaps help out in making this library. I was just looking for machine learning libraries for D, in particular for doing deep learning, but it doesn't seem like there are any, since this thread came up at the top when I searched for it on Google. Or are there? Also, if the library is going to support GPU acceleration, which it has to if it is to be at least somewhat competitive, it would be great if it could use OpenCL to support non-NVIDIA graphics cards, since I only have an Intel graphics controller... In fact, it might even become the *only* deep learning library that supports non-NVIDIA GPUs natively (judging from this thread: https://community.amd.com/thread/170336), which would be really nice :) Did you start any development on it?
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On 01/12/2016 03:56 PM, John Colvin wrote: Please consider the second design I proposed? I don't think it solves a large problem. -- Andrei
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On 01/12/2016 04:06 PM, John Colvin wrote: On Tuesday, 12 January 2016 at 19:50:57 UTC, Fool wrote: On Tuesday, 12 January 2016 at 19:48:35 UTC, Fool wrote: On Tuesday, 12 January 2016 at 19:46:47 UTC, John Colvin wrote: On Tuesday, 12 January 2016 at 19:44:18 UTC, Fool wrote: Non-reflexive '<=' does not make any sense at all. It might be a bit of a mess, agreed, but nonetheless: assert(!(float.nan <= float.nan)); Agreed, but in case of float '<=' is not an order at all. By the way, that implies that the result of sorting an array of float by default comparison is undefined unless the array does not contain NaN. Didn't think of that. Yikes. Should we change the default predicate of std.algorithm.sort to std.math.cmp when ElementType!R is floating point? We're fine as we are. By default sort compares with "<". -- Andrei
Re: Anyone using DMD to build 32bit on OS X?
On Tuesday, 12 January 2016 at 21:28:43 UTC, Walter Bright wrote: On 1/12/2016 12:36 PM, Andrei Alexandrescu wrote: On 01/12/2016 03:30 PM, Jacob Carlborg wrote: On 2016-01-12 17:48, Walter Bright wrote: From reading the responses here, I believe the best solution is to continue to support OSX 32 bit, but as legacy support. This means folding in changes to the 64 bit support, but not adding features if they are not a natural consequence of the 64 bit support. I.e. leave the TLS support for 32 bits as is. I came to the same conclusions. I'll update the PR. Thanks much for this work! -- Andrei I agree! Going to native TLS is the right way forward for 64 bits. Would having shared libraries for 64bit only be ok too? I would like to avoid having to update the emulated TLS if possible. Thanks, Bit
Re: "Good PR" mechanical check
On Tuesday, 12 January 2016 at 21:04:33 UTC, Jacob Carlborg wrote: On 2016-01-12 15:53, Adam D. Ruppe wrote: I'm not sure if git supports this but I think it should be done fully automatically. Not even something the user runs, just when they open the pull request, it reformats the code. The hook/tool would need to do a commit with the changes. How would that work? The tool wouldn't have commit access to the repository from where the PR originates. It would also create a new commit hash that wouldn't match what the user has locally. Yeah, I don't know how that can be made to work. The closest I got was https://help.github.com/articles/about-webhooks/ at least then you can check whether dfmt agrees with the pull request (and block it). But people still need to manually run the thing.
Re: Anyone using DMD to build 32bit on OS X?
On 1/12/2016 12:36 PM, Andrei Alexandrescu wrote: On 01/12/2016 03:30 PM, Jacob Carlborg wrote: On 2016-01-12 17:48, Walter Bright wrote: From reading the responses here, I believe the best solution is to continue to support OSX 32 bit, but as legacy support. This means folding in changes to the 64 bit support, but not adding features if they are not a natural consequence of the 64 bit support. I.e. leave the TLS support for 32 bits as is. I came to the same conclusions. I'll update the PR. Thanks much for this work! -- Andrei I agree! Going to native TLS is the right way forward for 64 bits.
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On 01/12/2016 10:02 PM, John Colvin wrote: On Tuesday, 12 January 2016 at 20:52:51 UTC, Timon Gehr wrote: On 01/12/2016 07:27 PM, John Colvin wrote: ... struct S{ auto opCmp(S rhs){ return float.nan; } bool opEquals(S rhs){ return false; } } unittest{ S a,b; assert(!(a==b)); assert(!(a<b)); assert(!(a<=b)); assert(!(a>b)); assert(!(a>=b)); } what about classes and Object.opCmp? You can introduce a new opCmp signature in your subclass, but == is enforced to be reflexive for class objects. So this approach only really works for structs. (And for structs, it is obviously a hack.)
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On Tuesday, 12 January 2016 at 21:12:08 UTC, tsbockman wrote: On Tuesday, 12 January 2016 at 20:56:41 UTC, John Colvin wrote: Please consider the second design I proposed? It's small, simple, has no impact on existing code and works in the right direction (library types can emulate / act as replacements for builtins) as opposed to the other way (library types are second class). If non-total ordering is going to be supported, I don't understand what's wrong with just allowing this: bool opCmp(string op, T)(T right) const { } As an alternative to the current: bool opEquals(T)(T right) const { } int opCmp(T)(T right) const { } Make it a compile-time error for a type to implement both. There is no need to deprecate the current system - people can even be encouraged to continue using it, in the very common case where it can actually express the desired logic. This approach is simple and breaks no existing code. It is also optimally efficient with respect to runtime performance. I would kind of like that (it would definitely allow me to do what I want, as well as anything else I have failed to notice I need yet), but it flies quite strongly against Walter's views (and mine, to some extent) that we'll only end up with C++-like abuse of the overloading if we allow that. Having > and < overloaded separately is asking for trouble. Another possibility would be to introduce opCmpEquals(T)(T rhs) to handle [<>]= explicitly.
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On Tuesday, 12 January 2016 at 21:06:40 UTC, John Colvin wrote: On Tuesday, 12 January 2016 at 19:50:57 UTC, Fool wrote: By the way, that implies that the result of sorting an array of float by default comparison is undefined unless the array does not contain NaN. Didn't think of that. Yikes. Should we change the default predicate of std.algorithm.sort to std.math.cmp when ElementType!R is floating point? That depends on whether marketing decides to emphasize safety over performance. I'm glad that I'm not in charge! ;-)
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On Tuesday, 12 January 2016 at 20:56:41 UTC, John Colvin wrote: Please consider the second design I proposed? It's small, simple, has no impact on existing code and works in the right direction (library types can emulate / act as replacements for builtins) as opposed to the other way (library types are second class). If non-total ordering is going to be supported, I don't understand what's wrong with just allowing this: bool opCmp(string op, T)(T right) const { } As an alternative to the current: bool opEquals(T)(T right) const { } int opCmp(T)(T right) const { } Make it a compile-time error for a type to implement both. There is no need to deprecate the current system - people can even be encouraged to continue using it, in the very common case where it can actually express the desired logic. This approach is simple and breaks no existing code. It is also optimally efficient with respect to runtime performance.
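For what it's worth, Python takes exactly the per-operator route this proposal describes: each comparison operator overloads separately, which is what lets a partial order be expressed directly. A sketch using divisibility on positive integers (the Div wrapper is hypothetical, invented for illustration):

```python
class Div:
    """Partial order by divisibility: Div(a) <= Div(b) iff a divides b."""
    def __init__(self, n):
        self.n = n
    def __eq__(self, other):
        return self.n == other.n            # genuine equality
    def __le__(self, other):
        return other.n % self.n == 0        # "a divides b"
    def __lt__(self, other):
        return self <= other and self != other

assert Div(2) <= Div(8)        # 2 divides 8
assert not (Div(2) <= Div(3))  # 2 and 3 are incomparable:
assert not (Div(3) <= Div(2))  # <= is False in both directions...
assert Div(2) <= Div(2)        # ...yet still reflexive on equal values
```

This is also the design that invites the "C++-like abuse" Walter worries about: nothing forces the separate operators to stay mutually consistent.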
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On Tuesday, 12 January 2016 at 19:50:57 UTC, Fool wrote: On Tuesday, 12 January 2016 at 19:48:35 UTC, Fool wrote: On Tuesday, 12 January 2016 at 19:46:47 UTC, John Colvin wrote: On Tuesday, 12 January 2016 at 19:44:18 UTC, Fool wrote: Non-reflexive '<=' does not make any sense at all. It might be a bit of a mess, agreed, but nonetheless: assert(!(float.nan <= float.nan)); Agreed, but in case of float '<=' is not an order at all. By the way, that implies that the result of sorting an array of float by default comparison is undefined unless the array does not contain NaN. Didn't think of that. Yikes. Should we change the default predicate of std.algorithm.sort to std.math.cmp when ElementType!R is floating point?
Re: "Good PR" mechanical check
On 2016-01-12 15:53, Adam D. Ruppe wrote: I'm not sure if git supports this but I think it should be done fully automatically. Not even something the user runs, just when they open the pull request, it reformats the code. The hook/tool would need to do a commit with the changes. How would that work? The tool wouldn't have commit access to the repository from where the PR originates. It would also create a new commit hash that wouldn't match what the user has locally. -- /Jacob Carlborg
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On Tuesday, 12 January 2016 at 20:52:51 UTC, Timon Gehr wrote: On 01/12/2016 07:27 PM, John Colvin wrote: ... struct S{ auto opCmp(S rhs){ return float.nan; } bool opEquals(S rhs){ return false; } } unittest{ S a,b; assert(!(a==b)); assert(!(a<b)); assert(!(a<=b)); assert(!(a>b)); assert(!(a>=b)); } what about classes and Object.opCmp?
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On Tuesday, 12 January 2016 at 20:52:51 UTC, Timon Gehr wrote: On 01/12/2016 07:27 PM, John Colvin wrote: ... struct S{ auto opCmp(S rhs){ return float.nan; } bool opEquals(S rhs){ return false; } } unittest{ S a,b; assert(!(a==b)); assert(!(a<b)); assert(!(a<=b)); assert(!(a>b)); assert(!(a>=b)); } Interesting, I'll have to think more about this. Pretty ugly to have to use floating point instructions for every comparison, no matter the actual data, but maybe there's something here...
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On Tuesday, 12 January 2016 at 20:04:26 UTC, Andrei Alexandrescu wrote: On 01/12/2016 03:01 PM, John Colvin wrote: On Tuesday, 12 January 2016 at 19:28:36 UTC, Andrei Alexandrescu wrote: On 01/12/2016 02:13 PM, John Colvin wrote: a<=b and b<=a must also be false. Would the advice "Only use < and == for partially-ordered data" work? -- Andrei If by that you mean "Only use <= or >= on data that defines a total ordering"* I guess it would work, but it has some pretty big downsides: 1) Annoying to use. 2) You have to use the opCmp return 0 (which normally means a[<>]=b && b[<>]=a) to mean "not comparable". 3) Not enforceable. Because of 2 you'll always get true if you use >= or <= on any a pair that doesn't have a defined ordering. 4) inefficient (have to do both < and == separately which can be a lot more work than <=). *would be safer to say "types that define", but strictly speaking... I'd be in favor of giving people the option to disable the use of <= and >= for specific data. It's a simple and logical approach. -- Andrei Having thought about this a bit more, it doesn't fix the problem: It doesn't enable custom float types that are on par with builtins, doesn't enable transparent "missing-value" types and doesn't make tsbockman's checked integer types (or other custom types) work properly and transparently with builtin floats. The points 1, 2 and 4 from above still stand. Also - the big problem - it requires antisymmetry, which means no preorders. One of the great things about D's opCmp and opEquals is that it separates `a==b` from `a<=b && b<=a`, which enables it to express types without antisymmetric ordering (see original post for examples). What you're describing would be a frustrating situation where you have to choose between breaking antisymmetry and breaking totality, but never both. Please consider the second design I proposed?
It's small, simple, has no impact on existing code and works in the right direction (library types can emulate / act as replacements for builtins) as opposed to the other way (library types are second class).
Re: Tutorials section on vibed.org
On Monday, 4 January 2016 at 09:53:45 UTC, Sönke Ludwig wrote: I've just added a sub page on vibed.org to collect links to all existing vibe.d tutorials [1]. If you know of any additional ones, or would like to have an existing one removed, please leave a quick comment: http://forum.rejectedsoftware.com/groups/rejectedsoftware.vibed/thread/29242/ Thanks! [1]: https://vibed.org/tutorials http://d.readthedocs.org/en/latest/examples.html#web-application
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On 01/12/2016 07:27 PM, John Colvin wrote: ... struct S{ auto opCmp(S rhs){ return float.nan; } bool opEquals(S rhs){ return false; } } unittest{ S a,b; assert(!(a==b)); assert(!(a<b)); assert(!(a<=b)); assert(!(a>b)); assert(!(a>=b)); }
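For readers outside D: the effect of that snippet (every comparison between a pair of values answering false) can be mirrored in Python, where each operator must be spelled out by hand since there is no opCmp lowering; the class below is a hypothetical illustration, not D semantics:

```python
class S:
    """Models a pair of incomparable values: every ordering comparison,
    and equality, answers False. This mirrors what the NaN-returning
    opCmp achieves through D's operator lowering, where e.g. a >= b
    lowers to a.opCmp(b) >= 0 and NaN >= 0 is False."""
    def __eq__(self, other): return False
    def __lt__(self, other): return False
    def __le__(self, other): return False
    def __gt__(self, other): return False
    def __ge__(self, other): return False

a, b = S(), S()
assert not (a == b)
assert not (a < b) and not (a <= b)
assert not (a > b) and not (a >= b)
```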
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On Tuesday, 12 January 2016 at 20:25:25 UTC, Andrei Alexandrescu wrote: D uses !(b < a) for a <= b. We can invent notation to disallow that rewrite. Anyhow the use of <, >, <=, and >= for partially ordered types is bound to be less than smooth. Math papers and books often use other notations (such as rounded or square less-than) to denote operators for partially ordered data, exactly because denoting them with the classic notation may confuse the reader. Andrei It is perfectly fine to use !(b < a) for a <= b. But as John has pointed out this is sensible only if '<=' is total. Personally, I'm unsure about the best solution for D. I understand Walter's argument to 'keep it simple' and do not support non-total opCmp. On the other hand it is a bit unsatisfactory that one cannot write a custom type that behaves like float.
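Fool's point, that !(b < a) coincides with a <= b only when the order is total, can be checked directly against IEEE float semantics (shown here in Python; D's built-in float behaves the same way):

```python
nan = float("nan")

# For ordinary values the rewrite holds:
a, b = 1.0, 2.0
assert (a <= b) == (not (b < a))

# For NaN it does not: <= is False, but so is <, so the
# rewritten form wrongly reports True.
assert not (nan <= nan)
assert not (nan < nan)
assert (not (nan < nan)) != (nan <= nan)
```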
Re: Anyone using DMD to build 32bit on OS X?
On 01/12/2016 03:30 PM, Jacob Carlborg wrote: On 2016-01-12 17:48, Walter Bright wrote: From reading the responses here, I believe the best solution is to continue to support OSX 32 bit, but as legacy support. This means folding in changes to the 64 bit support, but not adding features if they are not a natural consequence of the 64 bit support. I.e. leave the TLS support for 32 bits as is. I came to the same conclusions. I'll update the PR. Thanks much for this work! -- Andrei
Re: Anyone using DMD to build 32bit on OS X?
On 2016-01-12 17:48, Walter Bright wrote: From reading the responses here, I believe the best solution is to continue to support OSX 32 bit, but as legacy support. This means folding in changes to the 64 bit support, but not adding features if they are not a natural consequence of the 64 bit support. I.e. leave the TLS support for 32 bits as is. I came to the same conclusions. I'll update the PR. -- /Jacob Carlborg
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On 01/12/2016 03:10 PM, Fool wrote: On Tuesday, 12 January 2016 at 20:04:26 UTC, Andrei Alexandrescu wrote: I'd be in favor of giving people the option to disable the use of <= and >= for specific data. It's a simple and logical approach. -- Andrei But doesn't the symbol <= originate from ORing < and = ? D uses !(b < a) for a <= b. We can invent notation to disallow that rewrite. Anyhow the use of <, >, <=, and >= for partially ordered types is bound to be less than smooth. Math papers and books often use other notations (such as rounded or square less-than) to denote operators for partially ordered data, exactly because denoting them with the classic notation may confuse the reader. Andrei
Re: [dlang.org] getting the redesign wrapped up
On 12.01.2016 08:24, Vladimir Panteleev wrote: > Nice. Is it responsive? As responsive as the main site. I just updated the dlang.org submodule and fixed what got broken. I'm mostly done now. Pull request is over here: https://github.com/CyberShadow/DFeed/pull/51
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On Tuesday, 12 January 2016 at 20:04:26 UTC, Andrei Alexandrescu wrote: I'd be in favor of giving people the option to disable the use of <= and >= for specific data. It's a simple and logical approach. -- Andrei But doesn't the symbol <= originate from ORing < and = ?
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On Tuesday, 12 January 2016 at 20:10:11 UTC, Fool wrote: But doesn't the symbol <= originate from ORing < and = ? '=' in the mathematical sense.
Re: "Good PR" mechanical check
On Tuesday, 12 January 2016 at 13:34:25 UTC, Andrei Alexandrescu wrote: Related to https://github.com/D-Programming-Language/dlang.org/pull/1191: A friend who is in the GNU community told me a while ago they have a mechanical style checker that people can run against their proposed patches to make sure the patches have a style consistent with the one enforced by the GNU style. I forgot the name, something like "good-patch". I'll shoot him an email. Similarly, I think it would help us to release a tool in the tools/ repo that analyzes a would-be Phobos pull request and ensures it's styled the same way as most of Phobos: braces on their own lines, whitespace inserted a specific way, no hard tabs, etc. etc. Then people can run the tool before even submitting a PR to make sure there's no problem of that kind. Over the years I've developed some adaptability to style, and I can do Phobos' style no problem even though it wouldn't be my first preference (I favor Egyptian braces). But seeing a patchwork of styles within the same project, same file, and sometimes even the same pull request is quite jarring at least to me. Who would like to embark on writing such a tool? Thanks, Andrei There is already an attempt to it in DMD: https://github.com/D-Programming-Language/dmd/blob/master/src/checkwhitespace.d Agree it would be useful to have a tool which needs to be part of the auto-tester (I think it's called from the Makefile for DMD). We could also provide a `post-commit-hook` that people can use so git checks it automatically. It needs to be checked on a per-commit basis because else we'll end up with commits affecting each other, which is no good either.
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On Tuesday, 12 January 2016 at 19:28:36 UTC, Andrei Alexandrescu wrote: On 01/12/2016 02:13 PM, John Colvin wrote: a<=b and b<=a must also be false. Would the advice "Only use < and == for partially-ordered data" work? -- Andrei If by that you mean "Only use <= or >= on data that defines a total ordering"* I guess it would work, but it has some pretty big downsides: 1) Annoying to use. 2) You have to use the opCmp return 0 (which normally means a[<>]=b && b[<>]=a) to mean "not comparable". 3) Not enforceable. Because of 2 you'll always get true if you use >= or <= on any a pair that doesn't have a defined ordering. 4) inefficient (have to do both < and == separately which can be a lot more work than <=). *would be safer to say "types that define", but strictly speaking...
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On 01/12/2016 03:01 PM, John Colvin wrote: On Tuesday, 12 January 2016 at 19:28:36 UTC, Andrei Alexandrescu wrote: On 01/12/2016 02:13 PM, John Colvin wrote: a<=b and b<=a must also be false. Would the advice "Only use < and == for partially-ordered data" work? -- Andrei If by that you mean "Only use <= or >= on data that defines a total ordering"* I guess it would work, but it has some pretty big downsides: 1) Annoying to use. 2) You have to use the opCmp return 0 (which normally means a[<>]=b && b[<>]=a) to mean "not comparable". 3) Not enforceable. Because of 2 you'll always get true if you use >= or <= on any a pair that doesn't have a defined ordering. 4) inefficient (have to do both < and == separately which can be a lot more work than <=). *would be safer to say "types that define", but strictly speaking... I'd be in favor of giving people the option to disable the use of <= and >= for specific data. It's a simple and logical approach. -- Andrei
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On Tuesday, 12 January 2016 at 19:48:35 UTC, Fool wrote: On Tuesday, 12 January 2016 at 19:46:47 UTC, John Colvin wrote: On Tuesday, 12 January 2016 at 19:44:18 UTC, Fool wrote: Non-reflexive '<=' does not make any sense at all. It might be a bit of a mess, agreed, but nonetheless: assert(!(float.nan <= float.nan)); Agreed, but in case of float '<=' is not an order at all. By the way, that implies that the result of sorting an array of float by default comparison is undefined unless the array does not contain NaN.
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On Tuesday, 12 January 2016 at 19:44:18 UTC, Fool wrote: On Tuesday, 12 January 2016 at 19:21:47 UTC, John Colvin wrote: Note that a non-reflexive <= doesn't imply anything about ==. Non-reflexive '<=' does not make any sense at all. It might be a bit of a mess, agreed, but nonetheless: assert(!(float.nan <= float.nan));
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On Tuesday, 12 January 2016 at 19:46:47 UTC, John Colvin wrote: On Tuesday, 12 January 2016 at 19:44:18 UTC, Fool wrote: Non-reflexive '<=' does not make any sense at all. It might be a bit of a mess, agreed, but nonetheless: assert(!(float.nan <= float.nan)); Agreed, but in case of float '<=' is not an order at all.
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On Tuesday, 12 January 2016 at 19:21:47 UTC, John Colvin wrote: Note that a non-reflexive <= doesn't imply anything about ==. Non-reflexive '<=' does not make any sense at all.
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On 01/12/2016 02:13 PM, John Colvin wrote: a<=b and b<=a must also be false. Would the advice "Only use < and == for partially-ordered data" work? -- Andrei
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On Tuesday, 12 January 2016 at 19:13:29 UTC, John Colvin wrote: On Tuesday, 12 January 2016 at 19:00:11 UTC, Andrei Alexandrescu wrote: On 01/12/2016 01:27 PM, John Colvin wrote: Preorder or partial order: not possible in D, opCmp insists on totality. The way I look at it, a partial order would implement opCmp and opEqual such that a < b, b < a, and a == b are simultaneously false for unordered objects. Would that float your boat? -- Andrei a<=b and b<=a must also be false. That would work for a partial order, yes. Unfortunately, that's not possible with the current opCmp design, hence my 2 suggestions for improvements (I'm pretty sure the second one is better). The key thing is to have a design that doesn't enforce totality. s/totality/reflexivity which also implies it can't force totality. Note that a non-reflexive <= doesn't imply anything about ==.
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On Tuesday, 12 January 2016 at 19:00:11 UTC, Andrei Alexandrescu wrote: On 01/12/2016 01:27 PM, John Colvin wrote: Preorder or partial order: not possible in D, opCmp insists on totality. The way I look at it, a partial order would implement opCmp and opEqual such that a < b, b < a, and a == b are simultaneously false for unordered objects. Would that float your boat? -- Andrei a<=b and b<=a must also be false. That would work for a partial order, yes. Unfortunately, that's not possible with the current opCmp design, hence my 2 suggestions for improvements (I'm pretty sure the second one is better). The key thing is to have a design that doesn't enforce totality.
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On Tuesday, 12 January 2016 at 18:27:15 UTC, John Colvin wrote: P.S. This is not just about floats! This also affects any custom numeric type which should be comparable with float - while working on a checked integer type for Phobos, one of the (minor) problems I have run into is that it is impossible to reproduce the comparison behaviour of the built-in integers with respect to floating-point values - even though the latest version of my checked integer type actually has no "NaN" state of its own.
Re: "Good PR" mechanical check
On 01/12/2016 01:25 PM, tsbockman wrote: On Tuesday, 12 January 2016 at 13:34:25 UTC, Andrei Alexandrescu wrote: [...] I realize that dfmt may need some upgrades first, but isn't it about time to just suck it up and dfmt the whole of phobos and druntime? It will mess with the "git blame", true - but it will do so *once* and end tedious manual inspection of formatting for pull requests forever. I would support that. We need a strong champion there. Brian wanted to get to it for a long time, but didn't muster the time. -- Andrei
Re: "Good PR" mechanical check
On 01/12/2016 08:42 AM, Martin Drašar via Digitalmars-d wrote: Wouldn't it be sufficient to mandate usage of dfmt with proper settings before submitting a PR? That would suffice at least in the beginning. We also need to put dfmt in tools, again a project that's been in limbo for a long time. -- Andrei
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On 01/12/2016 01:27 PM, John Colvin wrote: Preorder or partial order: not possible in D, opCmp insists on totality. The way I look at it, a partial order would implement opCmp and opEqual such that a < b, b < a, and a == b are simultaneously false for unordered objects. Would that float your boat? -- Andrei
Re: D and C APIs
Many times I've considered simply incorporating a C compiler into the D compiler, and then: import "stdio.h"; The perennial problem, however, is the C preprocessor and all the bizarre things people do with it in even the most mundane header files. The problem is NOT, however, implementing the preprocessor, as that's already done. The problem is C #include files do not exist in a vacuum; they derive their meaning based on the compiler used, the predefined macros, and the host file they were #include'd in. Well, and the inevitable C language extensions, different for every compiler, and the developers determined to use every last one of them :-( Then there's what type does 'char' map to? When the D compiler guesses wrong about all these issues, there's no way to see or correct what it did, and so we wind up with endless "bugs" and support issues. It's better to have a separate translator, and the user can tweak the results of it.
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On Tuesday, 12 January 2016 at 18:36:32 UTC, Ilya Yaroshenko wrote: On Tuesday, 12 January 2016 at 18:27:15 UTC, John Colvin wrote: Background: Some important properties for binary relations on sets that are somewhat similar to the normal ≤/≥ on the real numbers or integers are: [...] http://dlang.org/phobos/std_math.html#.cmp ? --Ilya That doesn't solve the whole problem: because std.math.cmp isn't the default comparator, you can't use a totally ordered float type as a drop-in for the builtin float types. A more interesting question it brings up, though, is: does the approach of imposing a (somewhat arbitrary) total order work for other types where you would normally use a less "strict" ordering? Does it work well for missing data representations?
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On Tuesday, 12 January 2016 at 18:27:15 UTC, John Colvin wrote: Background: Some important properties for binary relations on sets that are somewhat similar to the normal ≤/≥ on the real numbers or integers are: [...] http://dlang.org/phobos/std_math.html#.cmp ? --Ilya
Re: "Good PR" mechanical check
On Tuesday, 12 January 2016 at 18:25:48 UTC, tsbockman wrote: On Tuesday, 12 January 2016 at 13:34:25 UTC, Andrei Alexandrescu wrote: [...] I realize that dfmt may need some upgrades first, but isn't it about time to just suck it up and dfmt the whole of phobos and druntime? It will mess with the "git blame", true - but it will do so *once* and end tedious manual inspection of formatting for pull requests forever. Doubt I'm alone in that I'd be more willing to contribute if phobos and druntime used dfmt. I personally use a style near the complete opposite of phobos/druntime (2 spaces per indent, OTBS etc.) It's a chore to have to scrutinize every line to make sure it matches the style guide instead of just running it through a tool.
Re: "Good PR" mechanical check
On Tuesday, 12 January 2016 at 17:22:16 UTC, Walter Bright wrote: On 1/12/2016 6:53 AM, Adam D. Ruppe wrote: I'm pretty sure dfmt is up to the task in 99% of cases already. The last 1% always takes 99% of the dev time :-( But in this case, the 1% doesn't actually have to be fixed (although of course, the smaller the better), it's just the 1% of the work left to be done manually, where currently we do 100% manually.
Re: "Good PR" mechanical check
On Tuesday, 12 January 2016 at 13:34:25 UTC, Andrei Alexandrescu wrote: [...] I realize that dfmt may need some upgrades first, but isn't it about time to just suck it up and dfmt the whole of phobos and druntime? It will mess with the "git blame", true - but it will do so *once* and end tedious manual inspection of formatting for pull requests forever.
opCmp, [partial/total/pre]orders, custom floating point types etc.
Background: Some important properties for binary relations on sets that are somewhat similar to the normal ≤/≥ on the real numbers or integers are: a ≤ a (reflexivity); if a ≤ b and b ≤ a, then a = b (antisymmetry); if a ≤ b and b ≤ c, then a ≤ c (transitivity); a ≤ b or b ≤ a (totality, implies reflexivity). Definitions: A preorder obeys reflexivity and transitivity. A partial order obeys reflexivity, transitivity and antisymmetry. A total order obeys transitivity, antisymmetry and totality. A total preorder obeys transitivity and totality but not antisymmetry. Examples: Arrays ordered by length, vectors ordered by euclidean length, complex numbers ordered by absolute value etc. are all total preorders. Integers with ≤ or ≥ form a total order. float/double/real obey antisymmetry and transitivity but not reflexivity or totality. Implementations in D: Total order: opCmp with "consistent" opEquals to enforce antisymmetry. Total preorder: opCmp with "inconsistent" opEquals to break antisymmetry. Preorder or partial order: not possible in D, opCmp insists on totality. Antisymmetry and transitivity but not reflexivity or totality, e.g. custom float: not possible in D, opCmp insists on totality (no way for opCmp to signify nan comparisons, either with nan (reflexivity) or others (totality & reflexivity)). Solutions to the above problems: 1) opCmp - or some extended, renamed version of it - needs 4 return values: greater, lesser, equal and neither/unequal/incomparable. This would be the value that is returned when e.g. either side is nan. or, less intrusively and more (runtime) efficiently: 2) Introduce a new special function `bool opCmpOrdered(T rhs)` that, if defined, is used to short-circuit a comparison. Any previous lowering to `a.opCmp(b) [<>]=? 0` (as in https://dlang.org/spec/operatoroverloading.html#compare) would now lower to `a.opCmpOrdered(b) && a.opCmp(b) [<>]=? 0`. E.g. `a >= b` becomes `a.opCmpOrdered(b) && a.opCmp(b) >= 0`.
If opCmpOrdered isn't defined the lowering is unchanged from before (or opCmpOrdered defaults to true, same thing...). Bigger example: a custom float type

struct MyFloat
{
    // ...
    bool isNaN() { /* ... */ }

    bool opCmpOrdered(MyFloat rhs)
    {
        if (this.isNaN || rhs.isNaN)
            return false;
        else
            return true;
    }

    int opCmp(MyFloat rhs)
    {
        // can assume neither are nan
        /* ... */
    }

    bool opEquals(MyFloat rhs)
    {
        if (this.isNaN || rhs.isNaN)
            return false;
        else
            /* ... */
    }
}

unittest
{
    MyFloat a, b; // has .init as nan, of course :)

    static void allFail(MyFloat a, MyFloat b)
    {
        // all of these should short-circuit because
        // opCmpOrdered will return false
        assert(!(a == b));
        assert(!(a < b));
        assert(!(a <= b));
        assert(!(a > b));
        assert(!(a >= b));
    }

    allFail(a, b);
    a = 3;
    allFail(a, b);
    b = 4;
    assert(a != b);
    assert(a < b);
    assert(a <= b);
    assert(!(a > b));
    assert(!(a >= b));
    a = 4;
    assert(a == b);
    assert(!(a < b));
    assert(a <= b);
    assert(!(a > b));
    assert(a >= b);
}

P.S. This is not just about floats! It is also very useful for making types that represent missing data (e.g. encapsulating using int.min for a missing value). I can only come up with strained examples for preorders and partial orders that I would want people using < and > for, so I won't speak of them here. P.P.S. Note that I am *not* trying to extend D's operator overloading to make > and < usable for arbitrary binary relations, like in C++. This small change is strictly within the realm of what <, > and = are already used for (in D, with floats). I'm convinced that if you wouldn't read it out loud as something like "less/fewer/smaller than" or "greater/more/bigger than", you shouldn't be using < or >, you should name a separate function; I don't think this proposal encourages violating that principle.
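The proposed lowering can be mocked up outside D; the sketch below uses hypothetical Python free functions (op_cmp_ordered, op_cmp, le) purely to illustrate the short-circuit, and is not how D would implement it:

```python
import math

def op_cmp_ordered(a, b):
    # Stand-in for the proposed opCmpOrdered: a pair is ordered
    # iff neither side is NaN.
    return not (math.isnan(a) or math.isnan(b))

def op_cmp(a, b):
    # Stand-in for opCmp; may assume the pair is ordered.
    return (a > b) - (a < b)

def le(a, b):
    # The proposed rewrite of `a <= b`:
    #     a.opCmpOrdered(b) && a.opCmp(b) <= 0
    # `and` short-circuits, so opCmp never sees an unordered pair.
    return op_cmp_ordered(a, b) and op_cmp(a, b) <= 0

nan = float("nan")
assert le(3.0, 4.0)
assert not le(4.0, 3.0)
assert le(4.0, 4.0)      # reflexive on ordered values
assert not le(nan, 3.0)  # short-circuits: unordered pair
assert not le(nan, nan)  # matches IEEE float semantics
```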
Re: D and C APIs
On Tuesday, 12 January 2016 at 16:21:40 UTC, Atila Neves wrote: In C/C++, a change to the headers causes a recompilation which will fail if there are API changes. From any other language, it'll compile, link, and fail at runtime (unless the symbols change name). If you're lucky, in an obvious way. Atila It's true that working directly with the upstream header file will catch errors that are not caught at compile time in other languages, but that's only an issue if you're not testing the calls you're making to the API. A couple of simple examples where replacing the old header with the new is not a good substitute for testing in the presence of arbitrary changes: 1. There's a change in the return value from 0=success to 0=failure. Then your program every so often dies with a segfault somewhere else, and it takes two days to track down the problem. 2. The function body stays the same but the signature changes from int foo(int x, int y) to int foo(int y, int x). Suddenly you're dividing by zero with previously working code.
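The second hazard can be sketched concretely; the foo functions below are hypothetical, and Python stands in only to show the silent wrong result (in C the recompile would likewise succeed, since the swapped parameters have identical types):

```python
# Version 1 of a hypothetical API: foo(x, y) computes x / y.
def foo_v1(x, y):
    return x // y

# Version 2 swaps the parameters in the signature but keeps the body,
# so positional callers now get y / x without any compile-time error.
def foo_v2(y, x):
    return x // y

# A caller written against version 1, passing positionally:
assert foo_v1(10, 2) == 5
# The same call site against version 2 silently computes something else:
assert foo_v2(10, 2) == 0   # 2 // 10
# And a previously safe call can now divide by zero:
try:
    foo_v2(0, 5)            # was 0 // 5 == 0; now 5 // 0
    raised = False
except ZeroDivisionError:
    raised = True
assert raised
```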
If Java Were Designed Today: The Synchronizable Interface
http://blog.jooq.org/2016/01/12/if-java-were-designed-today-the-synchronizable-interface/ D's synchronized classes and statements are (AFAIK) very similar to Java's so I thought this might spark an interesting discussion.
Re: extern(C++, ns)
On 1/11/2016 8:02 PM, Manu via Digitalmars-d wrote: Surely the fact that people are implementing machinery to undo the damage done is a strong indication that they don't want the feature. Please, can anyone produce an argument in favour...? Otherwise just accept that it was a bad idea and eject it into space. Why could anyone be attached to it? I already did for your scheme: int a; extern (C++,ns) { int a; } // error! The whole point of namespaces in C++ is to introduce scoped names. Not putting them in a scope might work for your project, but in general it will not, and the workarounds you suggested for it (putting them in separate modules) are awkward. All (*) the specific examples you've posted about fundamental problems with the current scheme have been adequately addressed (bugs were fixed, and misunderstandings clarified). Nobody in this thread has been able to determine, with an example, what all the other problems you talk about are. (*) except the delegate one, and there are a couple solutions on the table for that. I want to deal with the problems you're having, and find a solution. But without examples illustrating them, I am dead in the water and cannot help.
Re: "Good PR" mechanical check
On 1/12/2016 6:53 AM, Adam D. Ruppe wrote: I'm pretty sure dfmt is up to the task in 99% of cases already. The last 1% always takes 99% of the dev time :-(
Re: extern(C++, ns)
On Tuesday, 12 January 2016 at 04:29:00 UTC, Manu wrote: Can you see the pointless-ness of this feature and the effort being asked? It is against my interest to spend time (that I don't have) to make this feature work, when I am 100% convinced it is a massive anti-feature and I just want to see it ejected into space. It creates nothing but edges and offers nothing. Not a single advantage that anyone has yet been able present. The solution is simple, and solves every related issue instantly. There's no point wasting time identifying and fixing bugs in an implementation that shouldn't exist in the first place. This never should have happened, we can correct it easily, no effort from anyone is required, and we can all get on with something else. Do you also feel somehow emotionally attached to this? Give me a thread of logic to grasp on to; there is no way I can imagine to objectively balance the existence of this feature against the problems. We already see here demonstrated evidence of someone other than me going out of their way to awkwardly eliminate the namespace from existence. It should at least be opt-in by default. Anyway, I'm out, I'll be back when I find time for another round at this code. For what it's worth, I'm completely flabbergasted by the fact it somehow introduces a "named scope". So, my understanding of the pros: - It makes it easy to mirror the organization of your C++ code: Maybe. You probably already have thought of the organization of your modules, which takes care of that job for you in D. This seems to be the one selling point. If there are others, please do elucidate. Now, the cons: - No opt-out. There are workarounds, but that's not exactly a good argument in favor. - It can already be done with existing language features. This is in my opinion a big one. Whenever other language features are suggested, this argument is used to shoot it down. Why not so with this feature? - Implementation problems. 
Not exactly a heavy argument against a feature, but if there are many problems with it, it's a hint that the design may be flawed. The arguments (that I see) seem to favor not introducing a new symbol.
Re: "Good PR" mechanical check
I think using dfmt for this is a good idea. If there any problems with dfmt which would prevent it from being used on Phobos, the problems can be patched and then that would strengthen dfmt.
Re: Anyone using DMD to build 32bit on OS X?
On 1/10/2016 9:12 AM, Jacob Carlborg wrote: I've implemented native TLS in DMD on OS X for 64bit. Now the question is, does it need to work for 32bit as well? The easiest would be to drop the 32bit support all together. Other options would be to continue to use emulate TLS on 32bit or implement native TLS for 32bit as well. I would prefer to not have to do this for 32bit as well. As far as I know we haven't released a 32bit binary of DMD for a very long time. It would be very rare to find a Mac that cannot run 64bit binaries. Native TLS on OS X would mean that the runtime requirements for the binaries produced by DMD on OS X would be OS X 10.7 (Lion) or later. Hopefully that should not be a problem since it's several years (and versions) old. From reading the responses here, I believe the best solution is to continue to support OSX 32 bit, but as legacy support. This means folding in changes to the 64 bit support, but not adding features if they are not a natural consequence of the 64 bit support. I.e. leave the TLS support for 32 bits as is. If somebody wants to take on the task of native 32 bit TLS, I welcome that, but it's not something we need to do. There's no need for a 32 bit binary of dmd, as 32 bit only OSX machines are long gone.
Re: D and C APIs
On Tuesday, 12 January 2016 at 12:56:39 UTC, bachmeier wrote: On Tuesday, 12 January 2016 at 10:43:40 UTC, Russel Winder wrote: D and Rust provide so many barriers to effective use of a C library, that I am resorting to using C++. Yes you have to do extra stuff to avoid writing C code, but nowhere near the amount you have to to create D and Rust adaptors. Sorry I can't offer any help, but I'm genuinely curious by what you mean in this part of your quote. If the API is changing, how does using C++, or for that matter C, help you? Sure, you can include the header directly in your program, but don't you still have to change your program? I must be missing something. In C/C++, a change to the headers causes a recompilation which will fail if there are API changes. From any other language, it'll compile, link, and fail at runtime (unless the symbols change name). If you're lucky, in an obvious way. Atila
Re: "Good PR" mechanical check
On Tuesday, 12 January 2016 at 13:34:25 UTC, Andrei Alexandrescu wrote: Similarly, I think it would help us to release a tool in the tools/ repo that analyzes a would-be Phobos pull request and ensures it's styled the same way as most of Phobos I'm not sure if git supports this but I think it should be done fully automatically. Not even something the user runs, just when they open the pull request, it reformats the code. I'm pretty sure dfmt is up to the task in 99% of cases already.
Re: D and C APIs
On Tuesday, 12 January 2016 at 13:17:16 UTC, Russel Winder wrote: On Tue, 2016-01-12 at 13:13 +0100, Jacob Carlborg via Digitalmars-d wrote: [...] […] [...] I tried on Debian Sid. I have both LLVM 3.6 and 3.7 installed (3.6 is still the default but I am using 3.7 to build LDC. I have yet to try on Fedora Rawhide. The downloaded DStep executable requires a link to libclang.so which does not exist on Debian Sid. There is a libclang.so.1 in both the 3.6 and 3.7 lib directories. I provided the symbolic link: libclang.so -> /usr/lib/x86_64-linux-gnu/libclang-3.7.so.1 which satisfied ldd, but on execution got a segfault. Can you link which headers you're trying to translate?
Re: D and C APIs
On Tuesday, 12 January 2016 at 13:24:48 UTC, Russel Winder wrote: On Tue, 2016-01-12 at 11:05 +, John Colvin via Digitalmars-d wrote: […] What's so hard about writing a few function prototypes, aliases and enums? It's annoying that we have to do it, but compared to writing the rest of a project it's always going to be a tiny amount of work. I started there but gave up quite quickly as there are two levels of API here, both of which are needed to use the higher-level API as it refers directly to low-level structs and stuff. There is the kernel device drivers level, which defines the low-level API, and then there is libdvd5 which provides a (slightly) higher C API – with all the idiocies of a C API for doing high-level library programming :-( I have found a Rust wrapper of the kernel API, but that would mean writing all the libdvbv5 equivalent myself before writing the application code. There is no equivalent D version and certainly no easy way of wrapping libdvbv5 in D without it. Go has problems with C APIs and no sensible GTK3 wrapper. For a lot of projects you can only bind what you actually need, I often just pretend that I have already written the bindings then write whatever lines are necessary to get it to compile! The problem is that this is easy in C++ for a C API, but not for D or Rust using the same C API. C++ can use the C stuff directly, D and Rust need an adaptor. I agree it's easier in C++, but what I mean is literally doing something like: 1) write code pretending you've got complete bindings 2) try to compile 3) write the bare minimum bindings necessary to make it compile 4) goto 1 It's amazing how little of an API often ends up being used and therefore how little binding code you have to write. Alternatively you can write the bindings immediately when you use them, but I prefer not having to do the context switch between writing application and bindings quite as often as that.
Re: "Good PR" mechanical check
On 13/01/16 2:34 AM, Andrei Alexandrescu wrote: Related to https://github.com/D-Programming-Language/dlang.org/pull/1191: A friend who is in the GNU community told me a while ago they have a mechanical style checker that people can run against their proposed patches to make sure the patches have a style consistent with the one enforced by the GNU style. I forgot the name, something like "good-patch". I'll shoot him an email. Similarly, I think it would help us to release a tool in the tools/ repo that analyzes a would-be Phobos pull request and ensures it's styled the same way as most of Phobos: braces on their own lines, whitespace inserted a specific way, no hard tabs, etc. etc. Then people can run the tool before even submitting a PR to make sure there's no problem of that kind. Over the years I've developed some adaptability to style, and I can do Phobos' style no problem even though it wouldn't be my first preference (I favor Egyptian braces). But seeing a patchwork of styles within the same project, same file, and sometimes even the same pull request is quite jarring at least to me. Who would like to embark on writing such a tool? Thanks, Andrei dfmt + create patch compared to before and after format. No need for a whole new tool.
Re: "Good PR" mechanical check
On 12 January 2016 at 14:34, Andrei Alexandrescu via Digitalmars-d wrote: > Related to https://github.com/D-Programming-Language/dlang.org/pull/1191: > > A friend who is in the GNU community told me a while ago they have a > mechanical style checker that people can run against their proposed > patches to make sure the patches have a style consistent with the one > enforced by the GNU style. > > […] > > Who would like to embark on writing such a tool? > > Thanks, > > Andrei Wouldn't it be sufficient to mandate usage of dfmt with proper settings before submitting a PR? Martin
"Good PR" mechanical check
Related to https://github.com/D-Programming-Language/dlang.org/pull/1191: A friend who is in the GNU community told me a while ago they have a mechanical style checker that people can run against their proposed patches to make sure the patches have a style consistent with the one enforced by the GNU style. I forgot the name, something like "good-patch". I'll shoot him an email. Similarly, I think it would help us to release a tool in the tools/ repo that analyzes a would-be Phobos pull request and ensures it's styled the same way as most of Phobos: braces on their own lines, whitespace inserted a specific way, no hard tabs, etc. etc. Then people can run the tool before even submitting a PR to make sure there's no problem of that kind. Over the years I've developed some adaptability to style, and I can do Phobos' style no problem even though it wouldn't be my first preference (I favor Egyptian braces). But seeing a patchwork of styles within the same project, same file, and sometimes even the same pull request is quite jarring at least to me. Who would like to embark on writing such a tool? Thanks, Andrei
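The checker Andrei asks for can be sketched as a small script: run the mechanical formatter over each file and diff the result against the original, so reviewers only see where the submission and the tool disagree. A hypothetical sketch — `dfmt` is assumed to read source on stdin and emit formatted source on stdout, and the demo below stands in `cat` (an identity "formatter") for it:

```shell
# style_check: report whether FILE matches FORMATTER's mechanical output.
# FORMATTER is any command that reads source on stdin and writes the
# formatted source on stdout (dfmt is assumed to support this mode).
style_check() {
    fmt="$1"
    f="$2"
    if "$fmt" < "$f" | diff -u "$f" - > /dev/null 2>&1; then
        echo "OK: $f matches the mechanical style"
    else
        # Deviation is flagged for human review, not auto-rejected,
        # which also leaves room for // dfmt off regions.
        echo "REVIEW: $f deviates from the mechanical style"
        return 1
    fi
}

# demo with `cat` standing in for dfmt:
printf 'void main() {}\n' > /tmp/demo.d
style_check cat /tmp/demo.d
```

Run per changed file in a PR, this gives exactly the "highlight places where the formatting deviates" workflow discussed in the thread, without forcing the formatter's output.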
Re: D and C APIs
On Tue, 2016-01-12 at 11:05 +0000, John Colvin via Digitalmars-d wrote: > […] > > What's so hard about writing a few function prototypes, aliases > and enums? It's annoying that we have to do it, but compared to > writing the rest of a project it's always going to be a tiny > amount of work. I started there but gave up quite quickly as there are two levels of API here, both of which are needed to use the higher-level API as it refers directly to low-level structs and stuff. There is the kernel device drivers level, which defines the low-level API, and then there is libdvbv5 which provides a (slightly) higher C API – with all the idiocies of a C API for doing high-level library programming :-( I have found a Rust wrapper of the kernel API, but that would mean writing all the libdvbv5 equivalent myself before writing the application code. There is no equivalent D version and certainly no easy way of wrapping libdvbv5 in D without it. Go has problems with C APIs and no sensible GTK3 wrapper. > For a lot of projects you can only bind what you actually need, I > often just pretend that I have already written the bindings then > write whatever lines are necessary to get it to compile! The problem is that this is easy in C++ for a C API, but not for D or Rust using the same C API. C++ can use the C stuff directly; D and Rust need an adaptor. -- Russel. = Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel.win...@ekiga.net 41 Buckmaster Road m: +44 7770 465 077 xmpp: rus...@winder.org.uk London SW11 1EN, UK w: www.russel.org.uk skype: russel_winder
Re: On Reggae [was D and C APIs]
On Tue, 2016-01-12 at 13:12 +0000, Atila Neves via Digitalmars-d wrote: > […] > You can also write build descriptions in Python with reggae BTW. Splendid. Python 3 I trust. -- Russel.
Re: D and C APIs
On Tue, 2016-01-12 at 12:56 +0000, bachmeier via Digitalmars-d wrote: > […] > Sorry I can't offer any help, but I'm genuinely curious about what > you mean in this part of your quote. If the API is changing, how > does using C++, or for that matter C, help you? Sure, you can > include the header directly in your program, but don't you still > have to change your program? I must be missing something. There are two APIs, the kernel/device driver and the library over it. libdvbv5 is trying to provide an unchanging adaptor and in many ways succeeds admirably (as long as you are happy with all the dreadful low-level things you have to do to do input/output and maps/dictionaries). The kernel/device driver API changes as it needs to without worrying about backward compatibility, leaving consistency and compatibility to libdvbv5. However libdvbv5 exposes some symbols and structures (that don't actually change that much) directly in the libdvbv5 API. Thus you actually have to wrap both APIs to do anything useful. -- Russel.
Re: Proposal: Database Engine for D
On Tuesday, 12 January 2016 at 12:53:29 UTC, w0rp wrote: I've played with the idea of using operator overloading for some kind of ORM before, but I don't think it's strictly necessary to use operator overloading for an ORM at all. Maybe in some cases it might make sense. The question is whether we want to provide a comparable experience to other languages or not. What is desired depends on the particular application area and on what other languages provide for it. The ability to create abstractions is important for a language like D, since that is supposed to be a primary feature.
Re: D and C APIs
On Tue, 2016-01-12 at 13:13 +0100, Jacob Carlborg via Digitalmars-d wrote: > […] > I assume you mean LLVM. Have you tried one from here [1]? Should > work > with LLVM 3.1 to 3.5 (at least). This is the matrix of Clang > versions > that I use for testing [2]. > > > [1] http://llvm.org/releases/ > [2] https://github.com/jacob-carlborg/dstep/blob/master/test.d#L233 I tried on Debian Sid. I have both LLVM 3.6 and 3.7 installed (3.6 is still the default, but I am using 3.7 to build LDC). I have yet to try on Fedora Rawhide. The downloaded DStep executable requires a link to libclang.so which does not exist on Debian Sid. There is a libclang.so.1 in both the 3.6 and 3.7 lib directories. I provided the symbolic link: libclang.so -> /usr/lib/x86_64-linux-gnu/libclang-3.7.so.1 which satisfied ldd, but on execution got a segfault. -- Russel.
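The workaround described above — pointing an unversioned libclang.so name at the distribution's versioned library — can be scripted. This is a hypothetical helper, and note the caveat from the post itself: on Debian Sid the link only satisfied ldd, and DStep still segfaulted afterwards, so this addresses the loader error, not necessarily the crash. The demo runs against a scratch directory rather than /usr/lib:

```shell
# link_libclang: give the dynamic loader a plain "libclang.so" name that
# points at the versioned library (e.g. libclang-3.7.so.1) that Debian and
# Fedora actually ship. Hypothetical helper; adjust the glob for your distro.
link_libclang() {
    libdir="$1"
    real=$(ls "$libdir"/libclang*.so.1 2>/dev/null | head -n 1)
    if [ -z "$real" ]; then
        echo "no versioned libclang found in $libdir" >&2
        return 1
    fi
    ln -sf "$real" "$libdir/libclang.so"
    echo "libclang.so -> $real"
}

# demo against a scratch directory standing in for /usr/lib/x86_64-linux-gnu:
mkdir -p /tmp/fakelib
touch /tmp/fakelib/libclang-3.7.so.1
link_libclang /tmp/fakelib
```

On a real system the first argument would be the distribution's library directory, and creating the link there needs root.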
Re: D and C APIs
On Tuesday, 12 January 2016 at 10:43:40 UTC, Russel Winder wrote: On Tue, 2016-01-12 at 08:12 +0000, Atila Neves via Digitalmars-d wrote: On Monday, 11 January 2016 at 17:25:26 UTC, Russel Winder wrote: > [...] This is the kind of thing I wrote reggae for. CMake is an alternative, but I'd rather write D than CMake script. CMake scripts are hideous in that the language is like nothing else, other than perhaps m4 macros. They should have used Lisp. Or Python. You can also write build descriptions in Python with reggae BTW. Atila
Re: D and C APIs
On Tuesday, 12 January 2016 at 11:05:38 UTC, John Colvin wrote: On Tuesday, 12 January 2016 at 10:43:40 UTC, Russel Winder wrote: On Tue, 2016-01-12 at 08:12 +0000, Atila Neves via Digitalmars-d wrote: On Monday, 11 January 2016 at 17:25:26 UTC, Russel Winder wrote: > [...] This is the kind of thing I wrote reggae for. CMake is an alternative, but I'd rather write D than CMake script. CMake scripts are hideous in that the language is like nothing else, other than perhaps m4 macros. They should have used Lisp. Or Python. I must try Reggae at some stage, but for now I need to progress this Me TV rewrite. D and Rust provide so many barriers to effective use of a C library that I am resorting to using C++. Yes, you have to do extra stuff to avoid writing C code, but nowhere near the amount you have to do to create D and Rust adaptors. What's so hard about writing a few function prototypes, aliases and enums? It's annoying that we have to do it, but compared to writing the rest of a project it's always going to be a tiny amount of work. What's hard is that the function prototypes usually use/need macros and struct definitions. Without a C preprocessor it's extremely hard to bind to any non-toy C API. It's the only reason I'd consider using C++ instead of D. But only after trying DStep first. Atila
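Atila's point is easier to see with a concrete sketch of what a minimal hand-written binding looks like. Everything below is hypothetical (there is no foo.h); it just shows the usual translations that bite in practice: a #define becomes a manifest constant, the struct is mirrored field-for-field, and the prototypes get extern(C) linkage.

```d
// foo.d -- minimal hand-written binding for a hypothetical C header foo.h.
// Only the symbols the application actually calls are declared; everything
// else in the header is ignored until the compiler complains.
module foo;

extern (C):

enum FOO_NAME_MAX = 16;          // was: #define FOO_NAME_MAX 16

struct foo_device                // field order and types must match C exactly
{
    int fd;
    char[FOO_NAME_MAX] name;
}

int  foo_open(foo_device* dev, const(char)* path);
void foo_close(foo_device* dev);
```

Object-like macros and plain structs translate mechanically like this; function-like macros and heavy conditional compilation are where the manual approach breaks down, and where a tool like DStep earns its keep.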
Re: D and C APIs
On Tuesday, 12 January 2016 at 10:43:40 UTC, Russel Winder wrote: D and Rust provide so many barriers to effective use of a C library that I am resorting to using C++. Yes, you have to do extra stuff to avoid writing C code, but nowhere near the amount you have to do to create D and Rust adaptors. Sorry I can't offer any help, but I'm genuinely curious about what you mean in this part of your quote. If the API is changing, how does using C++, or for that matter C, help you? Sure, you can include the header directly in your program, but don't you still have to change your program? I must be missing something.
Re: Proposal: Database Engine for D
I've played with the idea of using operator overloading for some kind of ORM before, but I don't think it's strictly necessary to use operator overloading for an ORM at all. Maybe in some cases it might make sense. I don't think the answer for building such a thing is to think of one idea, find out it won't work out well, and then give up. You have to be more creative than that.
Re: D and C APIs
On 2016-01-12 13:13, Jacob Carlborg wrote: I assume you mean LLVM. Have you tried one from here [1]. I use the Ubuntu releases to test on Debian 7 (64bit) and 6 (32bit). -- /Jacob Carlborg
Re: Tutorials section on vibed.org
On Tuesday, 5 January 2016 at 06:42:25 UTC, Sönke Ludwig wrote: On 05.01.2016 at 05:19, Charles wrote: On Monday, 4 January 2016 at 18:42:32 UTC, Sönke Ludwig wrote: On 04.01.2016 at 19:04, Pradeep Gowda wrote: On Monday, 4 January 2016 at 14:31:21 UTC, Sönke Ludwig wrote: Added! The footer of the website still says 2012-2014. Please fix that! Fixed, thanks! Looks like this on my phone (ignore the volume overlay, the screenshot key is dumb): http://imgur.com/ynuZUpq I'll fix that next. The media queries are still primitive ATM. Please bring back the black theme. The site has become much harder to read...
Re: D and C APIs
On 2016-01-12 11:39, Russel Winder via Digitalmars-d wrote: I tried downloading pre-built Linux DStep, but it requires an .so link that doesn't exist on Debian Sid or Fedora Rawhide. I hacked something and DStep segfaulted. I assume you mean LLVM. Have you tried one from here [1]. Should work with LLVM 3.1 to 3.5 (at least). This is the matrix of Clang versions that I use for testing [2]. [1] http://llvm.org/releases/ [2] https://github.com/jacob-carlborg/dstep/blob/master/test.d#L233 -- /Jacob Carlborg
Re: D and C APIs
On Tuesday, 12 January 2016 at 10:43:40 UTC, Russel Winder wrote: On Tue, 2016-01-12 at 08:12 +0000, Atila Neves via Digitalmars-d wrote: On Monday, 11 January 2016 at 17:25:26 UTC, Russel Winder wrote: > I am guessing that people have an answer to this: > > D making use of a C API needs a D module adapter. This can > either be constructed by hand (well it can, but…), or it can > be auto generated from the C header files and then hand > massaged (likely far better). I think the only tool for this > on Linux is DStep. > > This is all very well for a static unchanging API, but what > about C APIs that are generated from elsewhere? This > requires constant update of the D modules. Do people just do > this by hand? > > Is the pain of creating a V4L D module set worth the effort > rather than just suffering the pain of writing in C++? This is the kind of thing I wrote reggae for. CMake is an alternative, but I'd rather write D than CMake script. CMake scripts are hideous in that the language is like nothing else, other than perhaps m4 macros. They should have used Lisp. Or Python. I must try Reggae at some stage, but for now I need to progress this Me TV rewrite. D and Rust provide so many barriers to effective use of a C library that I am resorting to using C++. Yes, you have to do extra stuff to avoid writing C code, but nowhere near the amount you have to do to create D and Rust adaptors. What's so hard about writing a few function prototypes, aliases and enums? It's annoying that we have to do it, but compared to writing the rest of a project it's always going to be a tiny amount of work. For a lot of projects you can bind only what you actually need; I often just pretend that I have already written the bindings, then write whatever lines are necessary to get it to compile!
Re: D and C APIs
On Tue, 2016-01-12 at 08:12 +0000, Atila Neves via Digitalmars-d wrote: > On Monday, 11 January 2016 at 17:25:26 UTC, Russel Winder wrote: > > I am guessing that people have an answer to this: > > > > D making use of a C API needs a D module adapter. This can > > either be constructed by hand (well it can, but…), or it can be > > auto generated from the C header files and then hand massaged > > (likely far better). I think the only tool for this on Linux is > > DStep. > > > > This is all very well for a static unchanging API, but what > > about C APIs that are generated from elsewhere? This requires > > constant update of the D modules. Do people just do this by > > hand? > > > > Is the pain of creating a V4L D module set worth the effort > > rather than just suffering the pain of writing in C++? > > This is the kind of thing I wrote reggae for. CMake is an > alternative, but I'd rather write D than CMake script. CMake scripts are hideous in that the language is like nothing else, other than perhaps m4 macros. They should have used Lisp. Or Python. I must try Reggae at some stage, but for now I need to progress this Me TV rewrite. D and Rust provide so many barriers to effective use of a C library that I am resorting to using C++. Yes, you have to do extra stuff to avoid writing C code, but nowhere near the amount you have to do to create D and Rust adaptors. Also, using CMake and C++ I can use CLion, which seems to be far outstripping Eclipse/CDT and Netbeans as a C++ IDE. I still really dislike CMake scripts though. -- Russel.
Re: D and C APIs
On Mon, 2016-01-11 at 21:42 +0000, stew via Digitalmars-d wrote: > […] > At work we use CMake and have a target for this. The DStep target > is invoked whenever the C headers change. We also use SWIG this > way. Both tools often require some hand-massaging though. […] I tried downloading the pre-built Linux DStep, but it requires an .so link that doesn't exist on Debian Sid or Fedora Rawhide. I hacked something and DStep segfaulted. Given the long list of dependencies that don't seem to be easily installable on Debian Sid or Fedora Rawhide, I am hesitant to build from source. So as to progress the project at all at this stage I am using C++. Sad, but pragmatic. -- Russel.
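The CMake arrangement stew describes — a DStep target that regenerates the D bindings whenever the C headers change — can be sketched with a custom command. All paths, file names, and target names below are hypothetical; the only real pieces are CMake's add_custom_command/add_custom_target mechanism and dstep's -o output flag:

```cmake
# Hypothetical sketch: re-run DStep only when the C header is newer than
# the generated binding. DEPENDS gives CMake the header-to-binding edge,
# so an unchanged header means no regeneration.
add_custom_command(
    OUTPUT  ${CMAKE_BINARY_DIR}/bindings/foo.d
    COMMAND dstep ${CMAKE_SOURCE_DIR}/include/foo.h
            -o ${CMAKE_BINARY_DIR}/bindings/foo.d
    DEPENDS ${CMAKE_SOURCE_DIR}/include/foo.h
    COMMENT "Regenerating D bindings with DStep")

add_custom_target(bindings ALL
    DEPENDS ${CMAKE_BINARY_DIR}/bindings/foo.d)
```

As the post notes, the generated file typically still needs some hand-massaging, so in practice teams often commit a patched copy rather than consuming DStep's output directly.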