On Wednesday, 18 September 2013 at 13:39:29 UTC, bearophile wrote:
Just for a test, try to allocate all those arrays in a different way:
- First try a std.array.Array using the LDC2 compiler;
- Another thing to try is to allocate them on the stack using core.stdc.stdlib.alloca:
auto p = cast(T*)alloca(T.sizeof * g.vertexCount);
if (p is null) throw new OutOfMemoryError();  // from core.exception
auto centrality = p[0 .. g.vertexCount];
This probably can't be used for very large graphs, so you'd have to add extra logic to allocate the arrays on the C heap if they are too large. This could be useful if you call betweenness() many times.
- Try to optionally accept the buffers from outside.

Nice suggestions, I'll try those out! :-)

I think I did give std.array.Array a try earlier, when I was working on speeding up the code, and I don't remember it making any difference (if anything, it may have slowed things down). But I'll give it a second look and report back.
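
For when I retry it, here's a minimal sketch of the swap, assuming what's meant is std.container.Array (std.array has no Array container); the g.vertexCount name follows the snippet above:

import std.container : Array;

// Array!T manages its storage with malloc/free via reference
// counting, bypassing the GC entirely.
auto centrality = Array!double();
centrality.length = g.vertexCount;  // new elements are T.init (NaN for double)
foreach (i; 0 .. centrality.length)
    centrality[i] = 0.0;            // so zero them explicitly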

I haven't yet tried alloca or other manual memory management -- I was a bit reluctant, as I'd prefer to keep the code simple and readable -- but I'll give that a go too, just to see how it performs.
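
For my own reference, here's a rough sketch of how I read the combined suggestions -- the optional buffer parameter covers "accept the buffers from outside", and the cutoff decides between the stack and the C heap. The function name withScratch and the 8192-element cutoff are my own placeholders, not anything from the existing code:

import core.exception : OutOfMemoryError;
import core.stdc.stdlib : alloca, free, malloc;

enum size_t stackLimit = 8192;  // arbitrary guess, would need tuning

void withScratch(Graph, T = double)(Graph g, T[] buffer = null)
{
    T[] centrality;
    bool onHeap = false;

    if (buffer.length >= g.vertexCount)
    {
        centrality = buffer[0 .. g.vertexCount];  // caller-supplied
    }
    else if (g.vertexCount <= stackLimit)
    {
        auto p = cast(T*) alloca(T.sizeof * g.vertexCount);
        centrality = p[0 .. g.vertexCount];       // stack-allocated
    }
    else
    {
        auto p = cast(T*) malloc(T.sizeof * g.vertexCount);
        if (p is null) throw new OutOfMemoryError();
        centrality = p[0 .. g.vertexCount];       // C heap
        onHeap = true;
    }
    scope (exit) if (onHeap) free(centrality.ptr);

    centrality[] = 0;  // alloca/malloc memory is uninitialised

    // ... the actual betweenness computation would run here ...
}

Note that alloca can't be hidden inside a helper function, since its memory dies when that function returns -- which is part of why this adds the complexity I was hoping to avoid.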

It's interesting that you pick up on my use of to!T(0) etc. I had some concern about whether that would affect speed. When I originally wrote the code, my impression was that omitting it actually made things worse, and that the compiler was smart enough to carry out the conversion at compile time.

However, my impression now is that it makes no measurable difference whether I use to!T(0), plain literals, or any other method (e.g. enum constants T0, T1, etc.), so I might as well keep things simple and write 0 for integral types and 0.0 for floating point.
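
To illustrate what I mean (sumOf is just a made-up example, not code from the library):

import std.conv : to;

// A plain 0 literal implicitly converts to any numeric T, so the
// two initialisations below are interchangeable.
T sumOf(T)(T[] values)
{
    T total = 0;          // works for int, double, ...
    // T total = to!T(0); // equivalent, just noisier
    foreach (v; values)
        total += v;
    return total;
}

unittest
{
    assert(sumOf([1, 2, 3]) == 6);
    assert(sumOf([1.5, 2.5]) == 4.0);
}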
