On Thursday, 3 April 2014 at 21:06:52 UTC, Andrej Mitrovic wrote:
On 4/3/14, Bill Buckels <[email protected]> wrote:
D Compiler for .NET -- Compiles the code to Common Intermediate
Language (CIL) bytecode rather than to machine code. The CIL can
then be run via a Common Language Infrastructure (CLI) virtual
machine.

This seems out of place? What about D for .NET?

That was really my question:) If D were to have a built-in sfloat24 data type, to what extent would that affect using an interface layer like .NET?

Or does anybody use D in .NET? Does anyone care what Microsoft does with their layers? How about iOS? Anything precise happening in D over there? OS X?

Would the availability of sfloat24 in D expand the use of D in the .NET environment? Or for that matter any environment? Arduino? Raspberry Pi? Bluetooth? Anyone doing FPGA in D on some new contraption that isn't built yet?

Exactly what are your views on sfloat24 after reading the papers? What's the risk versus reward for a language like D if it took a giant leap of faith and decided to provide support for sfloat24?

Is this just something that electrical engineers are going to use for experimental programming, or does sfloat24 have practical merit that would make it desirable as a built-in data type for the D community?

I have none of these answers. I know the group advocating sfloat24 somewhat, and they believe strongly in this data type. I told one of the fellows that I would ask other programmers if they saw a need for sfloat24.
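Just to make the question concrete, here is a rough sketch of what I imagine a 24-bit storage float could look like as a plain D library struct today, without any built-in support. This is purely my own guess at a layout (keep float's 8-bit exponent, chop the mantissa down by a byte); I haven't seen the group's actual spec, so treat the name SFloat24 and the bit layout as assumptions on my part:

struct SFloat24
{
    ubyte[3] bits;   // 3 bytes of storage instead of float's 4

    this(float f)
    {
        // Reinterpret the IEEE-754 bits and drop the low 8 mantissa bits.
        uint u = *cast(uint*) &f;
        u >>= 8;
        bits[0] = cast(ubyte)(u & 0xFF);
        bits[1] = cast(ubyte)((u >> 8) & 0xFF);
        bits[2] = cast(ubyte)((u >> 16) & 0xFF);
    }

    float toFloat() const
    {
        // Rebuild a float with the lost mantissa bits zeroed.
        uint u = (bits[0] | (bits[1] << 8) | (bits[2] << 16)) << 8;
        return *cast(float*) &u;
    }
}

void main()
{
    import std.stdio : writeln;
    auto x = SFloat24(3.14159f);
    writeln(SFloat24.sizeof, " bytes, value roughly ", x.toFloat());
}

Of course the arithmetic here still widens to float, which may be exactly what the sfloat24 people want to avoid, and that is where built-in support (literals, promotion rules, maybe a direct mapping onto an FPGA or a small processor) would actually matter.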

Early adopters like D programmers likely have substantially more vision than complacent old C programmers like me.

Double precision has always worked for me in the C language over my last 30 years or so. However, I don't do the kind of precision work that they do. I also don't program small processors, and I haven't worried about running out of memory since CP/M, the Apple II and MS-DOS. I haven't worried much about speed in floating point calculations since Intel started including a floating point co-processor in their CPUs. But it was a real pain, back in the day, to wait for a double precision calculation to complete when we needed to link C with a floating point emulation library for folks who had no co-pros. I could've used a smaller, quicker, more precise double on those little boxes.

The banking software I wrote back then ran after-hours, and nobody much cared whether it was COBOL or C++... is it still the same job market today for you, even in D? Or does the bank just add a couple more blade servers when things bog down? Do programmers still bury rounding errors in the largest number?

With the prevalence of Bluetooth and embedded systems today, are there any D programmers working in those environments? Or is most of the world like me, perfectly content to sit on a Windows or Linux box and just use the stuff that comes with the compiler?

Frankly, the only way I can tell the difference between a program compiled with MinGW and one compiled with Microsoft C is that the MinGW program is smaller. It doesn't seem faster, and since both map to the same Windows calls, maybe it isn't faster.

So does it work the same way in D?

In the years I used the .NET layer, I couldn't tell the difference in speed between VB.NET and C#. I couldn't notice any difference in Windows Mobile on an ARM processor either.

As for Linux, whether it was C or C++, or even the Qt applications I worked on, or in earlier times using gcc on an IBM 360 or whatever when I did 'em all, everything ended up about the same. So is this more something a compiler might implement independent of any layer at all, and optimize internally based on the data type?

If so, does anyone want it besides scientists? Where's the use case in D, if any?

Questions of that nature...

Also, is anyone working on a trajectory calculation for a lunar landing in D? My friend Jack Crenshaw is with one of the Google ranger groups... but I don't get out much, so I don't know what other people do anymore:)

So I thought I should ask.

Bill
