On Wednesday, 20 December 2017 at 17:31:20 UTC, Ali Çehreli wrote:
On 12/20/2017 05:41 AM, Vino wrote:

> auto TL = dFiles.length;
> auto TP = new TaskPool(TL);

I assume dFiles is large. So, that's a lot of threads there.

> foreach (d; TP.parallel(dFiles[],1))

You tried with larger work unit sizes, right? More importantly, I think all these threads are working on the same disk. If the access is serialized by the OS or a lower entity, then all threads necessarily wait for each other, making the whole exercise serial.

> auto SdFiles = Array!ulong(dirEntries(d, SpanMode.depth).map!(a =>
> a.size).fold!((a,b) => a + b)(x))[].filter!(a => a > Size);
> Thread.sleep(5.seconds);

You don't need that at all. I had left it in there just to give me a chance to examine the number of threads the program was using.

Ali

Hi Ali,

Below are the answers.

"I think all these threads are working on the same disk. If the access is serialized by the OS or a lower entity, then all threads necessarily wait for each other, making the whole exercise serial."

The file system that is scanned here to find the folder sizes is a NetApp file system mapped on Windows 2008. It is exported using NFS v3, so you are right that the disk access is serialized.

The folders come from 2 NetApp file systems, and the number of folders in each file system is as below:

File system 1: 76 folders and File system 2: 77 folders.
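
With only about 150 top-level folders in total, a bounded pool and a larger work unit size are probably closer to what you suggested. A minimal sketch of what that could look like (the folder paths, the pool size, and the Size threshold below are only placeholders):

import std.algorithm : fold, map;
import std.container.array : Array;
import std.file : SpanMode, dirEntries;
import std.parallelism : TaskPool, totalCPUs;
import std.stdio : writeln;

void main()
{
    // Placeholder folder list; in practice this comes from the two
    // NetApp file systems (about 76 + 77 top-level folders).
    auto dFiles = Array!string("C:\\Temp\\dir1", "C:\\Temp\\dir2");

    // Placeholder size threshold (10 GiB).
    enum ulong Size = 10UL * 1024 * 1024 * 1024;

    // Bound the pool to the core count instead of one thread per folder;
    // the NFS layer serializes the disk access anyway, so extra threads
    // mostly add overhead.
    auto TP = new TaskPool(totalCPUs);
    scope (exit) TP.finish();

    // A larger work unit size hands several folders to each worker thread.
    foreach (d; TP.parallel(dFiles[], 10))
    {
        auto total = dirEntries(d, SpanMode.depth)
                     .map!(a => a.size)
                     .fold!((a, b) => a + b)(0UL);
        if (total > Size)
            writeln(d, " : ", total);
    }
}

Bounding the pool at totalCPUs should keep the thread count sane even if the folder list grows.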

"You don't need that at all. I had left it in there just to give me a chance to examine the number of threads the program was using."

We have not updated your main code yet; this was a test that we performed on a test server.
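
For the record, if the goal is only to see how many worker threads a pool is using, querying the pool directly avoids the sleep; a small sketch (the pool of 4 workers is only an example):

import std.parallelism : TaskPool, totalCPUs;
import std.stdio : writeln;

void main()
{
    auto TP = new TaskPool(4);   // example pool with 4 worker threads
    scope (exit) TP.finish();

    writeln("cores reported by the OS  : ", totalCPUs);
    writeln("worker threads in the pool: ", TP.size);
}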

From,
Vino.B
