On Thursday, 23 January 2014 at 22:39:38 UTC, Vladimir Panteleev wrote:
Well, even C with D's metaprogramming would be a big win. But picking-and-choosing runtime-supported features would be even better.

Quite right! I have actually delayed implementing classes a little bit because I found I could do so much with just structs, mixins, and templates. In fact, if there weren't so many types in the runtime implemented as classes, I could delay it even further.
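Just to illustrate what I mean, here's a minimal sketch (all names made up) of the kind of class-like reuse you get from structs, mixin templates, and a free-function template alone:

// a sketch, not real SlimD code
mixin template Counter()
{
    private size_t count;
    void increment() { ++count; }
    size_t total() const { return count; }
}

struct PacketStats
{
    mixin Counter;   // injects the members; no vtable, no class TypeInfo
}

// duck-typed "interface": anything with increment() compiles
void tally(T)(ref T stats, size_t n)
{
    foreach (i; 0 .. n)
        stats.increment();
}

None of that touches Object, vtables, or the class-related TypeInfo, which is exactly why I've been able to put classes off.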

I noticed you're using GNU make. On Windows, D uses Digital Mars make, which is quite minimalistic. I've considered writing a custom build tool for SlimD, but now that I think about it, I think GNU make will suffice.

Using GNU make was really an arbitrary decision for me. I really don't like GNU make and hate my own makefiles even more. Personally, I would prefer to write scripts in D, so I could automate whatever I wanted including builds.
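For example, a throwaway script like this (just a sketch; the flags and file names are placeholders) would already cover most of what my makefiles do:

// build.d -- hypothetical build script
import std.process : execute;
import std.stdio : writeln;

void main()
{
    auto r = execute(["dmd", "-c", "start.d", "-ofstart.o"]);
    if (r.status != 0)
    {
        writeln(r.output);   // show the compiler errors and stop
        return;
    }
    writeln("built start.o");
}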

My project, however, is just an exercise. I expect to go through this exercise a couple times and fine tune what I want. Then I'll make a real repository with just the good stuff :-)

So, you think it would be better to make the standard D runtime more customizable in this regard, instead of putting together a new one?

At the moment, I think throwing it all away and starting from scratch is the best approach, especially with the "minimal/modular" mindset, but I would like to get to a point where the two could be integrated (either I join D, or D joins me). I would prefer not to fragment D into "D", "D for embedded", "D for smartphones", "D for tablets", etc. I think if the D runtime were modular and designed with platform ubiquity in mind, it would simply require a customized makefile like you've already alluded to, or some other approach (see my comments below).

What's wrong with size_t?

There's nothing wrong with the concept of size_t. The problem is the "_t" suffix. That is C's baggage, and it is inconsistent with the other types in D. D doesn't use string_t, uint_t, etc., so I don't see why this convention is being employed in D.
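For what it's worth, size_t isn't baked into the compiler anyway; object.d just declares it as an alias, roughly like the first line below, so a runtime could expose whatever spelling it liked (the second name is purely hypothetical):

alias size_t = typeof(int.sizeof);   // roughly how object.d defines it
alias usize  = typeof(int.sizeof);   // a hypothetical suffix-free alternative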


There was an interesting discussion, recently, that might be relevant to this. I encourage you to check it out and maybe post your thoughts [5]. It's more about the required TypeInfo stuff, but nevertheless related to the -betterC idea.

Well, as long as the linker throws away all compiler-emitted TypeInfo objects if your code doesn't use them, there doesn't seem to be any practical problem with them, is there?

Yeah, that's the conclusion I eventually came to. It was just a major inconvenience to have to spend hours studying and implementing all of this TypeInfo stuff, and everything the TypeInfo uses, only to have it thrown away at link time. It felt like a huge waste of effort, but all that TypeInfo stuff may be needed in the end anyway, so I'm withholding harsh judgement for the moment.

ModuleInfo IS a problem, though. It is like a parasite of interdependence: it points to all class information (for Object.factory), which in turn pulls in all virtual methods of all classes in your program (through the vtable). That means that anything any virtual method in your program uses, regardless of whether you use that class or not, WILL end up in the executable. Ugh!
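For illustration, the lookup that forces this is roughly the following (my paraphrase of object.d, so treat the details as approximate):

// Object.factory must be able to construct *any* class by name,
// so it walks every ModuleInfo and every class registered in it.
Object factory(string classname)
{
    foreach (m; ModuleInfo)             // every module linked into the program
        foreach (c; m.localClasses)     // every class in that module
            if (c !is null && c.name == classname)
                return c.create();      // needs the ClassInfo, hence the vtable,
                                        // hence everything those methods reference
    return null;
}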

That sounds very undesirable. I still don't even understand what purpose modules and ModuleInfo really serve. Right now, I'm just using modules for namespace scope and encapsulation. If you know of some documentation that helps demystify ModuleInfo and its purpose (besides the source code), please point me to it.

Maybe we could join efforts, even though our goals target different platforms :)

What do you say to this repository structure (inspired by the Windows DDK):

Users start a new SlimD (or whatever name) project by adding a Makefile to their project directory. The makefile configures the project's requirements (target platform, used D features, linker of choice) and source files (*.d by default), and includes the main SlimD makefile, which does all the work based on the given configuration.

This is very much like what I had in mind, and I like it. However, I'm considering other methods, too.

1) Features/platforms can simply be provided at link time by adding certain *.a or *.o files to the linker command. If a needed feature is missing, linker errors will result.

For example, exception handling could be provided by seh.o, eh.o, sjlj.o, or no_eh.o, depending on which model the user wanted to use. (This example may not make any sense, as I know nothing about how to implement exception handling; I just couldn't think of a better illustration of my thoughts.)
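To make the mechanism concrete with a toy example (the symbol name below is entirely made up, not a real runtime hook): the runtime declares one extern(C) function, and each model ships an object file that defines it.

// feature_eh_full.d  ->  eh.o : a real implementation would go here
extern (C) void rt_handleThrow()
{
    // ... unwind the stack, find a handler, etc. ...
}

// feature_eh_none.d  ->  no_eh.o : stub for builds without exceptions
extern (C) void rt_handleThrow()
{
    for (;;) { }   // halt; a real stub might call a panic/reset routine instead
}

// Elsewhere, the runtime only declares the hook:
extern (C) void rt_handleThrow();

Link eh.o or no_eh.o to pick the behavior; link neither and the undefined-symbol error is exactly the "missing feature" diagnostic I mentioned.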

2) Templates can be used to provide modularity/specialization in one's source code at compile time. There's a C++ graphics library [1] that uses this approach to create platform-independent rendering pipelines. For example:

//pseudo code
typedef /* some color type */              color_t;
typedef /* some pixel format */            pixel_t;
typedef /* some path type */               path_t;
typedef surface<pixel_t>                   surface_t;
typedef rasterizer<path_t>                 rasterizer_t;
typedef renderer<rasterizer_t, surface_t>  renderer_t;

//Then draw like this
renderer_t r;
r.DrawLine(/*...*/);
r.DrawCircle(/*...*/);
...

So, surface_t could be defined as...

typedef surface<DirectX11> surface_t;   // or
typedef surface<OpenGL>    surface_t;   // or
typedef surface<X11>       surface_t;

... to provide some platform specialization at compile time, and the user of the code doesn't know the difference.

This is a pretty cool way to build a graphics library, and something along these lines could be used to provide some specialization to the D runtime too. Users would simply have to include a certain *.di file containing the required templated aliases/typedefs for a given platform. If users want to port to a different platform, or choose a different feature subset, they just create a different set of templated aliases/typedefs.
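In D, I imagine that per-platform *.di file boiling down to a handful of aliases, something like this sketch (all module and type names are hypothetical):

// gfx_config.di -- the only file a user swaps to change platform or feature set
module gfx_config;

import slimgfx.surface;      // hypothetical library modules
import slimgfx.rasterizer;
import slimgfx.renderer;

version (Windows)
    alias Surface = SurfaceImpl!Direct3DBackend;
else version (linux)
    alias Surface = SurfaceImpl!X11Backend;
else
    static assert(0, "no surface backend configured for this platform");

alias Rasterizer = RasterizerImpl!Path;
alias Renderer   = RendererImpl!(Rasterizer, Surface);

User code only ever names Surface and Renderer, so a port is just a different gfx_config.di -- essentially the AGG trick in D clothing.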

I'm brainstorming as I write this, so sorry for being so verbose. As you can see, I don't have a firm opinion yet; perhaps you could share yours.

Nevertheless, your proposed solution does basically what I'm looking for, and I would be happy to help in any way I can.

Mike

[1] - http://www.antigrain.com/
