On Tuesday, 5 September 2017 at 09:44:09 UTC, Vino.B wrote:
Hi,

The below code consumes more memory and is slower; can you provide your suggestions on how to overcome these issues?

You can start by dropping the .array conversions after dirEntries. That way your algorithm becomes lazy (as opposed to eager), meaning it won't allocate an entire DirEntry[] up front; instead, it will process the DirEntries one at a time, resulting in lower memory consumption.
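To make the difference concrete, here is a minimal sketch (the "." directory and the isDir filter are just placeholders):

import std.algorithm : filter;
import std.array : array;
import std.file : dirEntries, SpanMode;

void compareEagerLazy()
{
        // Eager: .array walks the whole directory immediately and
        // allocates a DirEntry[] holding every entry before filtering.
        auto eager = dirEntries(".", SpanMode.shallow).array
                .filter!(a => a.isDir);

        // Lazy: entries are produced one at a time as the range is consumed,
        // so no intermediate DirEntry[] is ever allocated.
        auto lazily = dirEntries(".", SpanMode.shallow)
                .filter!(a => a.isDir);
}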

I didn't understand the join(["\\\\?\\", d[0]]) part; maybe you meant to write join("\\\\?\\", d[0])?
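For what it's worth, a small sketch of what the array form evaluates to (the path below is a made-up example): it simply concatenates its elements, i.e. it prefixes the path with the Windows long-path marker.

import std.array : join;

void joinSketch()
{
        string path = `C:\data\dir1`;        // hypothetical example path
        auto a = join([`\\?\`, path]);       // elements joined with no separator
        auto b = `\\?\` ~ path;              // plain concatenation gives the same string
        assert(a == b);                      // both are \\?\C:\data\dir1
}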

If appender is too slow, you can experiment with a dynamic array whose capacity is preallocated: string[][] Subdata; Subdata.reserve(10000); In this case Subdata will have room for 10000 string[]s up front, which will result in better performance.
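A rough sketch of that variant, side by side with the appender (10000 is only a guess at the expected number of subdirectories):

import std.array : appender;

void preallocateSketch()
{
        // Appender-based accumulation, as used in the code below.
        auto app = appender!(string[][]);
        app ~= ["some/dir", "42"];

        // Plain dynamic array with reserved capacity: appends won't
        // reallocate until more than 10000 elements have been added.
        string[][] Subdata;
        Subdata.reserve(10_000);
        Subdata ~= ["some/dir", "42"];
}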

Here's the updated code (sans .array) in case anyone wants to reproduce the issue:

import std.stdio;
import std.conv;
import std.typecons;
import std.array;
import std.path;
import std.container;
import std.file;
import std.parallelism;
import std.algorithm;

void main()
{
        ".".csizeDirList(1024).each!writeln;
}

string[][] csizeDirList (string FFs, int SizeDir) {
        ulong subdirTotal = 0;
        ulong subdirTotalGB;
        auto Subdata = appender!(string[][]);
        // Lazily list the immediate subdirectories, skipping names that match *DND*.
        auto dFiles = dirEntries(FFs, SpanMode.shallow)
                .filter!(a => a.isDir && !globMatch(a.baseName, "*DND*"))
                .map!(a => tuple(a.name, a.size));

        foreach (d; dFiles) {
                // Walk the subdirectory recursively; "\\?\" is the Windows long-path prefix.
                auto SdFiles = dirEntries(join(["\\\\?\\", d[0]]), SpanMode.depth)
                        .map!(a => tuple(a.size));
                // Sum the entry sizes, parallelised with a work unit size of 1.
                foreach (f; parallel(SdFiles,1)) {
                        subdirTotal += f[0];
                }
                subdirTotalGB = (subdirTotal/1024/1024);
                if (subdirTotalGB > SizeDir) {
                        Subdata ~= [d[0], to!string(subdirTotalGB)];
                }
                subdirTotal = 0;
        }
        return Subdata.data;
}
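To try it out, save the snippet as e.g. dirsize.d (the name is arbitrary), compile with dmd dirsize.d and run the resulting binary from a suitable parent directory; every subdirectory over the threshold is printed as a string[], along the lines of ["subdir", "2048"].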
