Re: Performance issue with fiber

2021-07-30 Thread hanabi1224 via Digitalmars-d-learn

On Friday, 30 July 2021 at 14:41:06 UTC, Daniel Kozak wrote:

I have rewritten it to be the same as the dart version


Thanks! There are both a generator version and a fiber version on the 
site (where possible); the two versions are not really comparable to 
each other (generator solutions should be much faster). There's 
another dart implementation with Isolate 
[here](https://github.com/hanabi1224/Programming-Language-Benchmarks/blob/main/bench/algorithm/coro-prime-sieve/2.dart); it's unlisted because of its very bad performance. (Isolate is the closest thing dart has to a thread or fiber, but it's much, much more expensive even to spawn.)


I'd like to list D's generator solution, but please note that it's 
only comparable to the kotlin/c#/python generator solutions; the 
fiber one is still a separate issue.


Re: Performance issue with fiber

2021-07-30 Thread Daniel Kozak via Digitalmars-d-learn
On Wed, Jul 28, 2021 at 11:41 PM hanabi1224 via Digitalmars-d-learn <
digitalmars-d-learn@puremagic.com> wrote:

> On Wednesday, 28 July 2021 at 16:26:49 UTC, drug wrote:
> > I profiled the provided example (not `FiberScheduler`) using
> > perf. Both dmd and ldc2 gave the same result - `void
> > filterInner(int, int)` took ~90% of the run time. The time was
> > divided between:
> >   `int std.concurrency.receiveOnly!(int).receiveOnly()` - 58%
> >   `void std.concurrency.send!(int).send(std.concurrency.Tid,
> > int)` - 31%
> >
> > So most of the time is message passing.
> >
> > Meanwhile, the fibers' creation took very little time. Perf output
> > contains information only for `void
> > std.concurrency.FiberScheduler.create(void delegate()).wrap()`,
> > which took less than 0.5%. But I wouldn't say that I did the
> > profiling ideally, so take it with a grain of salt.
>
> Very interesting findings! After making the Fiber fix, I also
> profiled with valgrind. The result shows MessageBox-related
> stuff contributes ~13.7% of total cycles, while swapContext-related
> stuff adds up to a larger percentage (my rough estimate is
>  >50%). I'd like to share the result svg but did not figure out
> how to upload it here.
>

I have rewritten it to be the same as the dart version

import std;

// Note: the archive truncated this post mid-loop; the ending below is a
// reconstruction following the dart version's pattern of chaining one
// filtering generator per prime found.
Generator!int filtered(Generator!int source, int prime) {
    return new Generator!int({
        foreach (i; source)
            if (i % prime != 0)
                yield(i);
    });
}

void main(string[] args) {
    auto n = args.length > 1 ? args[1].to!int() : 5;

    auto r = new Generator!int({
        for (auto i = 2;; i++)
            yield(i);
    });

    for (auto i = 0; i < n; i++) {
        auto prime = r.front;
        writeln(prime);
        r = filtered(r, prime);
    }
}

Re: Performance issue with fiber

2021-07-28 Thread hanabi1224 via Digitalmars-d-learn

On Wednesday, 28 July 2021 at 16:26:49 UTC, drug wrote:
I profiled the provided example (not `FiberScheduler`) using 
perf. Both dmd and ldc2 gave the same result - `void 
filterInner(int, int)` took ~90% of the run time. The time was 
divided between:

`int std.concurrency.receiveOnly!(int).receiveOnly()` - 58%
	`void std.concurrency.send!(int).send(std.concurrency.Tid, 
int)` - 31%


So most of the time is message passing.

Meanwhile, the fibers' creation took very little time. Perf output 
contains information only for `void 
std.concurrency.FiberScheduler.create(void delegate()).wrap()`, 
which took less than 0.5%. But I wouldn't say that I did the 
profiling ideally, so take it with a grain of salt.


Very interesting findings! After making the Fiber fix, I also 
profiled with valgrind. The result shows MessageBox-related 
stuff contributes ~13.7% of total cycles, while swapContext-related 
stuff adds up to a larger percentage (my rough estimate is 
>50%). I'd like to share the result svg but did not figure out 
how to upload it here.
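
The hot path these profiles point at is the per-stage receive/forward loop. A self-contained sketch of that pattern, with a zero as a shutdown sentinel (illustrative only, not the actual benchmark code):

```d
import std.concurrency;
import std.stdio;

// What a candidate must satisfy to be forwarded past a stage.
bool passesStage(int n, int prime) { return n % prime != 0; }

// Each sieve stage receives candidates over its mailbox and forwards
// the survivors to the next stage; zero shuts the stage down.
void filterStage(Tid next, int prime) {
    for (;;) {
        auto n = receiveOnly!int();   // receiveOnly: ~58% of runtime per the perf data
        if (n == 0) { next.send(0); return; }
        if (passesStage(n, prime))
            next.send(n);             // send: ~31% of runtime
    }
}

void main() {
    // One stage filtering multiples of 2, feeding back into main's mailbox.
    auto stage = spawn(&filterStage, thisTid, 2);
    foreach (i; 2 .. 10) stage.send(i);
    stage.send(0);

    int[] got;
    for (;;) {
        auto n = receiveOnly!int();
        if (n == 0) break;
        got ~= n;
    }
    writeln(got);                     // prints [3, 5, 7, 9]
    assert(got == [3, 5, 7, 9]);
}
```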


Re: Performance issue with fiber

2021-07-28 Thread hanabi1224 via Digitalmars-d-learn

On Wednesday, 28 July 2021 at 16:31:49 UTC, Ali Çehreli wrote:
I assume the opposite because normally, the number of times a 
thread or fiber is spawned is nothing compared to the number of 
times they are context-switched. So, spawning can be expensive 
and nobody would realize as long as switching is cheap.


You are right, but that's not my point. Whether fiber spawning is 
expensive should be judged relative to threads, and it should be 
much less expensive: people can expect to create far more fibers at 
a time than system threads, even stackful ones (the stacks should 
mostly contribute to heap memory usage; fiber stack size should not 
be a perf bottleneck before running out of memory). And when 
analyzing perf issues with fibers, 'fiber is expensive' is not a 
valid explanation to me, because the fiber itself is the solution to 
the expensiveness of threads, and none of the fiber implementations 
in other languages/runtimes have the same issue with the same test 
case.




Re: Performance issue with fiber

2021-07-28 Thread hanabi1224 via Digitalmars-d-learn

On Wednesday, 28 July 2021 at 14:39:29 UTC, Mathias LANG wrote:

Hence doing:
```diff
- auto scheduler = new FiberScheduler();
+ scheduler = new FiberScheduler();
```


Thanks for pointing it out! Looks like I was benchmarking threads 
instead of fibers. I just made the change you suggest, but the 
result is very similar. That being said, using a system thread or a 
fiber makes no obvious difference in this test case, and that fact 
itself seems problematic: a fiber should be much faster than a 
system thread here (as I have shown for many other langs with the 
same case; I published results 
[here](https://programming-language-benchmarks.vercel.app/problem/coro-prime-sieve), though note that not all of them are implemented with stackful coroutines), unless there's some defect in D's current fiber implementation.


Re: Performance issue with fiber

2021-07-28 Thread Ali Çehreli via Digitalmars-d-learn

On 7/28/21 1:15 AM, hanabi1224 wrote:

> On Wednesday, 28 July 2021 at 01:12:16 UTC, Denis Feklushkin wrote:
>> Spawning fiber is expensive
>
> Sorry but I cannot agree with the logic behind this statement; the whole
> point of using fibers is that spawning a system thread is expensive, thus
> people created the lightweight thread, the 'fiber'

I assume the opposite because normally, the number of times a thread or 
fiber is spawned is nothing compared to the number of times they are 
context-switched. So, spawning can be expensive and nobody would realize 
as long as switching is cheap.


There are other reasons why fibers are faster than threads all related 
to context switching:


- CPU cache efficiency

- Translation lookaside buffer (TLB) efficiency

- Holding on to the entirety of the time slice given by the OS

Ali

P.S. The little I know on these topics is included in this presentation:

  https://dconf.org/2016/talks/cehreli.html



Re: Performance issue with fiber

2021-07-28 Thread drug via Digitalmars-d-learn

On 28.07.2021 17:39, Mathias LANG wrote:

On Wednesday, 21 July 2021 at 22:51:38 UTC, hanabi1224 wrote:
Hi, I'm new to D lang and encountered some performance issues with 
fiber; not sure if there's something obviously wrong with my code.


I took a quick look, and the first problem I saw was that you were using 
`spawnLinked` but not replacing the scheduler.

`std.concurrency` uses a global `scheduler` variable to do its job.
Hence doing:
```diff
- auto scheduler = new FiberScheduler();
+ scheduler = new FiberScheduler();
```

Will ensure that `spawnLinked` works as expected.
There are a few other things to consider, w.r.t. fibers:
- Our Fibers are 4 pages (on Linux) by default;
- We have an extra guard page, because we are a native language, so we 
can't do the same trick as Go to auto-grow the stack;
- Spawning fibers *is* expensive, and other languages reuse fibers (yes, 
Go recycles them);


The `FiberScheduler` implementation is unfortunately pretty bad: it 
[does not re-use 
fibers](https://github.com/dlang/phobos/blob/b48cca57e8ad2dc56872499836bfa1e70e390abb/std/concurrency.d#L1578-L1599). 
I believe this is the core of the issue.


I profiled the provided example (not `FiberScheduler`) using perf. Both 
dmd and ldc2 gave the same result - `void filterInner(int, int)` took 
~90% of the run time. The time was divided between:

`int std.concurrency.receiveOnly!(int).receiveOnly()` - 58%
`void std.concurrency.send!(int).send(std.concurrency.Tid, int)` - 31%

So most of the time is messages passing.

Between the fibers creating took very few time. Perf output contains 
information only of `void std.concurrency.FiberScheduler.create(void 
delegate()).wrap()` which took less than 0.5%. But I wouldn't say that I 
did the profiling ideally so take it with a grain of salt.


Re: Performance issue with fiber

2021-07-28 Thread Mathias LANG via Digitalmars-d-learn

On Wednesday, 21 July 2021 at 22:51:38 UTC, hanabi1224 wrote:
Hi, I'm new to D lang and encountered some performance issues 
with fiber; not sure if there's something obviously wrong with 
my code.


I took a quick look, and the first problem I saw was that you 
were using `spawnLinked` but not replacing the scheduler.
`std.concurrency` uses a global `scheduler` variable to do its 
job.

Hence doing:
```diff
- auto scheduler = new FiberScheduler();
+ scheduler = new FiberScheduler();
```

Will ensure that `spawnLinked` works as expected.
There are a few other things to consider, w.r.t. fibers:
- Our Fibers are 4 pages (on Linux) by default;
- We have an extra guard page, because we are a native language, 
so we can't do the same trick as Go to auto-grow the stack;
- Spawning fibers *is* expensive, and other languages reuse fibers 
(yes, Go recycles them);


The `FiberScheduler` implementation is unfortunately pretty bad: 
it [does not re-use 
fibers](https://github.com/dlang/phobos/blob/b48cca57e8ad2dc56872499836bfa1e70e390abb/std/concurrency.d#L1578-L1599). I believe this is the core of the issue.
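
To make "does not re-use fibers" concrete: the recycling that Go-style runtimes do can be sketched as a free list that resets terminated fibers onto new tasks instead of allocating a fresh stack each time. `FiberPool` below is a hypothetical helper for illustration, not an existing druntime/Phobos API:

```d
import core.thread : Fiber;

// Keep terminated fibers on a free list and reset them onto the next
// task, reusing the already-allocated stack. Hypothetical helper.
class FiberPool {
    private Fiber[] free;

    Fiber acquire(void delegate() task) {
        if (free.length) {
            auto f = free[$ - 1];
            free = free[0 .. $ - 1];
            f.reset(task);            // reuse the existing stack
            return f;
        }
        return new Fiber(task);       // allocate only when the pool is empty
    }

    void release(Fiber f) { free ~= f; }
}

void main() {
    auto pool = new FiberPool;
    auto f = pool.acquire({ });
    f.call();                         // run to completion (TERM state)
    pool.release(f);
    auto g = pool.acquire({ });
    assert(f is g);                   // same object, same stack, new task
    g.call();
}
```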


Re: Performance issue with fiber

2021-07-28 Thread James Blachly via Digitalmars-d-learn

On 7/27/21 9:12 PM, Denis Feklushkin wrote:
Spawning a fiber is expensive (though not as expensive as spawning a 
thread, of course), but switching is fast.


Thus, you can spawn and pause "worker" fibers to await jobs.
(Probably this behaviour is already implemented in a number of libraries, 
and it isn't actually necessary to implement another one.)


Agree with sibling comment by hanabi1224: spawning a fiber (in other 
language runtimes) is INCREDIBLY fast, even though they have stacks.


Something is wrong here.


Re: Performance issue with fiber

2021-07-28 Thread hanabi1224 via Digitalmars-d-learn
On Wednesday, 28 July 2021 at 01:12:16 UTC, Denis Feklushkin 
wrote:

Spawning fiber is expensive


Sorry but I cannot agree with the logic behind this statement. The 
whole point of using fibers is that spawning a system thread is 
expensive, thus people created the lightweight thread, the 'fiber'. 
If a 'fiber' is still considered expensive, should people invent a 
'giber' on top of 'fiber', then a 'hiber' on top of 'giber'? That 
would be infinite.


If by 'expensive' you mean 'each fiber has its own stack', then to 
be fair, golang's goroutine is stackful as well (a 4KB stack for 
each goroutine), yet it's >50X faster with the same test case.


Re: Performance issue with fiber

2021-07-27 Thread Denis Feklushkin via Digitalmars-d-learn

On Monday, 26 July 2021 at 12:09:07 UTC, hanabi1224 wrote:

Thank you for your response! I've got some questions tho.

On Saturday, 24 July 2021 at 09:17:47 UTC, Stefan Koch wrote:


It will not use a fiber pool.


Why a fiber pool? Isn't a fiber a lightweight logical thread that 
is already implemented with a thread pool internally?


Spawning a fiber is expensive (though not as expensive as spawning 
a thread, of course), but switching is fast.


Thus, you can spawn and pause "worker" fibers to await jobs.
(Probably this behaviour is already implemented in a number of 
libraries, and it isn't actually necessary to implement another one.)


Re: Performance issue with fiber

2021-07-26 Thread jfondren via Digitalmars-d-learn

On Monday, 26 July 2021 at 15:27:48 UTC, russhy wrote:

build:

```
dub build --compiler=ldc -brelease --single primesv1.d
```




-brelease is a typo issue, I don't think that produces the desired 
effect; most likely it defaulted to a debug build


it should be -b release


No, it builds a release build with -brelease. Try it yourself; 
the first line of output tells you what the build type is. 
Instead of interpreting -abc as -a -b -c like getopt, dub 
interprets it as -a bc.


--arch/-a works the same way, and although I don't see this usage 
in the official documentation, I didn't make it up:


https://duckduckgo.com/?q=dub+%22brelease%22=1


Re: Performance issue with fiber

2021-07-26 Thread russhy via Digitalmars-d-learn

build:

```
dub build --compiler=ldc -brelease --single primesv1.d
```




-brelease is a typo issue, I don't think that produces the desired 
effect; most likely it defaulted to a debug build


it should be -b release




Re: Performance issue with fiber

2021-07-26 Thread hanabi1224 via Digitalmars-d-learn

Thank you for your response! I've got some questions tho.

On Saturday, 24 July 2021 at 09:17:47 UTC, Stefan Koch wrote:


It will not use a fiber pool.


Why a fiber pool? Isn't a fiber a lightweight logical thread that is 
already implemented with a thread pool internally?


Spawning a new fiber is expensive because of the stack 
allocation for it.


Actually, I've benchmarked many stackful coroutine 
implementations in different langs and they're all much faster. 
BTW, AFAIK go's goroutine is stackful as well (4KB IIRC).


Re: Performance issue with fiber

2021-07-24 Thread jfondren via Digitalmars-d-learn

On Saturday, 24 July 2021 at 09:17:47 UTC, Stefan Koch wrote:

On Wednesday, 21 July 2021 at 22:51:38 UTC, hanabi1224 wrote:
Hi, I'm new to D lang and encountered some performance issues 
with fiber; not sure if there's something obviously wrong with 
my code.




There is your problem.

auto scheduler = new FiberScheduler;


The Fiber scheduler will spawn a new fiber for every job.
It will not use a fiber pool. Spawning a new fiber is expensive 
because of the stack allocation for it.
Also if I recall correctly it will run single-threaded but I am 
not 100% sure on that.
Just have a look at the running processes ... if you just see 
one, then you are single-threaded.


I get 8->3 seconds using vibe's fiber scheduler, which still 
isn't competitive with Elixir.


```d
--- primes.d    2021-07-24 21:37:46.633053839 -0500
+++ primesv1.d  2021-07-24 21:35:50.843053425 -0500
@@ -1,16 +1,19 @@
 /++ dub.sdl:
+dependency "vibe-core" version="~>1.16.0"
  +/
-import std;
-import core.stdc.stdlib : exit;
+import std.stdio, std.conv;
+import vibe.core.concurrency;

 __gshared Tid mainTid;
 __gshared bool terminated = false;

 const int mailBoxSize = 1;

+extern(C) void _exit(int status);
+
 void main(string[] args) {
 auto n = args.length > 1 ? args[1].to!int() : 10;
-auto scheduler = new FiberScheduler;
+setConcurrencyPrimitive(ConcurrencyPrimitive.workerTask);
 scheduler.start({
 mainTid = thisTid();
 setMaxMailboxSize(mainTid, n, OnCrowding.throwException);
@@ -22,7 +25,7 @@
 writeln(prime);
 }
 terminated = true;
-exit(0);
+_exit(0);
 });
 }
```

build:

```
dub build --compiler=ldc -brelease --single primesv1.d
```

I think this is just a very goroutine-friendly test that relies 
on constantly spawning fibers and abusing message-passing rather 
than architecting out the concurrent parts of your program and 
how they should communicate. std.parallelism is more appropriate in D.
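
For that "architect it out" route, a sketch of what std.parallelism might look like on this kind of workload (naive trial division, purely illustrative, not a tuned benchmark entry):

```d
import std.parallelism : taskPool;
import std.range : iota;
import std.array : array;
import std.stdio : writeln;

// Naive trial division, just to give the pool something to chew on.
bool isPrime(int n) {
    foreach (d; 2 .. n)
        if (n % d == 0) return false;
    return n >= 2;
}

void main() {
    auto candidates = iota(2, 30).array;
    // amap evaluates isPrime over the input on the shared thread pool
    // and returns the results as an array, in input order.
    auto flags = taskPool.amap!isPrime(candidates);
    foreach (i, ok; flags)
        if (ok) writeln(candidates[i]);   // prints the primes below 30
}
```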


Re: Performance issue with fiber

2021-07-24 Thread Stefan Koch via Digitalmars-d-learn

On Wednesday, 21 July 2021 at 22:51:38 UTC, hanabi1224 wrote:
Hi, I'm new to D lang and encountered some performance issues 
with fiber; not sure if there's something obviously wrong with 
my code.




There is your problem.

auto scheduler = new FiberScheduler;


The Fiber scheduler will spawn a new fiber for every job.
It will not use a fiber pool. Spawning a new fiber is expensive 
because of the stack allocation for it.
Also if I recall correctly it will run single-threaded but I am 
not 100% sure on that.
Just have a look at the running processes ... if you just see one, 
then you are single-threaded.




Re: Performance issue with fiber

2021-07-21 Thread seany via Digitalmars-d-learn

On Wednesday, 21 July 2021 at 22:51:38 UTC, hanabi1224 wrote:
Hi, I'm new to D lang and encountered some performance issues 
with fiber; not sure if there's something obviously wrong with 
my code.


[...]


Following.

I am also in need of more information on increasing the speed of D 
binaries that use parallel code.


Re: Performance Issue

2017-09-07 Thread Vino.B via Digitalmars-d-learn

On Wednesday, 6 September 2017 at 18:44:26 UTC, Azi Hassan wrote:
On Wednesday, 6 September 2017 at 18:21:44 UTC, Azi Hassan 
wrote:
I tried to create a similar file structure on my Linux 
machine. Here's the result of ls -R TEST1:


TEST1:
BACKUP
...


Upon further inspection it looks like I messed up the output.

[31460]  - Array 1 for folder 1(all files in Folder 1) of the 
FS C:\\Temp\\TEST1\\BACKUP
[138]  - Array 2 for folder 2(all files in Folder 2) of 
the FS C:\\Temp\\TEST1\\BACKUP
[2277663, 2277663]  - Array 3 for folder 1(all files in Folder 
1) of the FS C:\\Temp\\TEST2\\EXPOR
[31460] - Array 4 for folder 2(all files in Folder 2) the FS 
C:\\Temp\\TEST2\\EXPORT


What files do these sizes correspond to ? Shouldn't there be 
two elements in the first array because 
C:\Temp\TEST1\BACKUP\FOLDER1 contains two files ?


Hi Azi,

 I was able to implement "fold"; below is the updated code. 
Regarding container arrays: I have almost completed my program 
(Release 1), so it is not a good idea to convert it from standard 
arrays to container arrays at this point. Starting tomorrow I will 
be working on Release 2, where I plan to make the above changes. I 
have not yet finished studying container arrays, so can you help me 
with how to implement a container array for the code below?
Note: I have raised another thread, "Container Array", asking the 
same.


string[][] coSizeDirList (string FFs, int SizeDir) {
    ulong subdirTotal = 0;
    ulong subdirTotalGB;
    auto Subdata = appender!(string[][]); Subdata.reserve(100);
    auto dFiles = dirEntries(FFs, SpanMode.shallow)
        .filter!(a => a.isDir)
        .map!(a => tuple(a.name, a.size)).array;

    foreach (d; dFiles) {
        auto SdFiles = dirEntries(join(["?\\", d[0]]), SpanMode.depth)
            .map!(a => tuple(a.size)).array;
        foreach (f; parallel(SdFiles, 1)) {
            subdirTotal += f.fold!((a, b) => a + b);
        }

        subdirTotalGB = (subdirTotal/1024/1024);
        if (subdirTotalGB > SizeDir) {
            Subdata ~= [d[0], to!string(subdirTotalGB)];
        }

        subdirTotal = 0;
    }
    return Subdata.data;
}

Note To All:

I am basically an admin guy; I started learning D a few months ago 
and found it very interesting, hence I raise so many questions, so 
I request you to bear with me for a while.



From,
Vino.B


Re: Performance Issue

2017-09-06 Thread Azi Hassan via Digitalmars-d-learn

On Wednesday, 6 September 2017 at 18:21:44 UTC, Azi Hassan wrote:
I tried to create a similar file structure on my Linux machine. 
Here's the result of ls -R TEST1:


TEST1:
BACKUP
...


Upon further inspection it looks like I messed up the output.

[31460]  - Array 1 for folder 1(all files in Folder 1) of the 
FS C:\\Temp\\TEST1\\BACKUP
[138]  - Array 2 for folder 2(all files in Folder 2) of the 
FS C:\\Temp\\TEST1\\BACKUP
[2277663, 2277663]  - Array 3 for folder 1(all files in Folder 
1) of the FS C:\\Temp\\TEST2\\EXPOR
[31460] - Array 4 for folder 2(all files in Folder 2) the FS 
C:\\Temp\\TEST2\\EXPORT


What files do these sizes correspond to ? Shouldn't there be two 
elements in the first array because C:\Temp\TEST1\BACKUP\FOLDER1 
contains two files ?


Re: Performance Issue

2017-09-06 Thread Azi Hassan via Digitalmars-d-learn

On Wednesday, 6 September 2017 at 15:11:57 UTC, Vino.B wrote:

On Wednesday, 6 September 2017 at 14:38:39 UTC, Vino.B wrote:
Hi Azi,

  The required out is like below

[31460]  - Array 1 for folder 1(all files in Folder 1) of the 
FS C:\\Temp\\TEST1\\BACKUP
[138]  - Array 2 for folder 2(all files in Folder 2) of the 
FS C:\\Temp\\TEST1\\BACKUP
[2277663, 2277663]  - Array 3 for folder 1(all files in Folder 
1) of the FS C:\\Temp\\TEST2\\EXPOR
[31460] - Array 4 for folder 2(all files in Folder 2) the FS 
C:\\Temp\\TEST2\\EXPORT


I tried to create a similar file structure on my Linux machine. 
Here's the result of ls -R TEST1:


TEST1:
BACKUP

TEST1/BACKUP:
FOLDER1
FOLDER2

TEST1/BACKUP/FOLDER1:
file1
file2
file3

TEST1/BACKUP/FOLDER2:
b1
b2

And here's the output of ls -R TEST2 :

TEST2:
EXPORT

TEST2/EXPORT:
FOLDER1
FOLDER2

TEST2/EXPORT/FOLDER1:
file2_1
file2_2
file2_3

TEST2/EXPORT/FOLDER2:
export1
export2
export3
export4

This code outputs the sizes in the format you described:

import std.algorithm: filter, map, fold, each;
import std.parallelism: parallel;
import std.file: SpanMode, dirEntries, DirEntry;
import std.stdio: writeln;
import std.typecons: tuple;
import std.path: globMatch;
import std.array;

void main () {
auto Filesys = ["TEST1/BACKUP", "TEST2/EXPORT"];
ulong[][] sizes;
foreach(FFs; Filesys)
{
		auto dFiles = dirEntries(FFs, SpanMode.shallow).filter!(a => 
a.isDir).map!(a => a.name);

foreach (d; dFiles) {
sizes ~= dirEntries(d, SpanMode.depth).map!(a => 
a.size).array;
}
}
sizes.each!writeln;
}

It outputs the sizes :

[6, 6, 6]
[8, 8]
[8, 8, 8]
[9, 9, 9, 9]

Note that there's no need to store them in ulong[][] sizes, you 
can display them inside the loop by replacing `sizes ~= 
dirEntries(d, SpanMode.depth).map!(a => a.size).array;` with 
`dirEntries(d, SpanMode.depth).map!(a => a.size).joiner(", 
").writeln;`


To make sure that it calculates the correct sizes, I made it 
display the paths instead by making "sizes" string[][] instead of 
ulong[][] and by replacing map!(a => a.size) with map!(a => 
a.name) in the second foreach loop :


import std.algorithm: filter, map, each;
import std.file: SpanMode, dirEntries, DirEntry;
import std.stdio: writeln;
import std.array : array;

void main () {
auto Filesys = ["TEST1/BACKUP", "TEST2/EXPORT"];
string[][] sizes;
foreach(FFs; Filesys)
{
		auto dFiles = dirEntries(FFs, SpanMode.shallow).filter!(a => 
a.isDir).map!(a => a.name);

foreach (d; dFiles) {
sizes ~= dirEntries(d, SpanMode.depth).map!(a => 
a.name).array;
}
}
sizes.each!writeln;
}

It outputs the paths as expected :

["TEST1/BACKUP/FOLDER1/file1", "TEST1/BACKUP/FOLDER1/file2", 
"TEST1/BACKUP/FOLDER1/file3"]

["TEST1/BACKUP/FOLDER2/b1", "TEST1/BACKUP/FOLDER2/b2"]
["TEST2/EXPORT/FOLDER1/file2_3", "TEST2/EXPORT/FOLDER1/file2_1", 
"TEST2/EXPORT/FOLDER1/file2_2"]
["TEST2/EXPORT/FOLDER2/export2", "TEST2/EXPORT/FOLDER2/export3", 
"TEST2/EXPORT/FOLDER2/export1", "TEST2/EXPORT/FOLDER2/export4"]


Re: Performance Issue

2017-09-06 Thread Vino.B via Digitalmars-d-learn

On Wednesday, 6 September 2017 at 14:38:39 UTC, Vino.B wrote:
On Wednesday, 6 September 2017 at 10:58:25 UTC, Azi Hassan 
wrote:

[...]


Hi Azi,

  Your are correct, i tried to implement the fold in a separate 
small program as below, but not able to get the the required 
output, when you execute the below program the output you get 
is as below


Output:
[31460]
[31460, 138]
[31460, 138, 2277663]
[31460, 138, 2277663, 2277663]
[31460, 138, 2277663, 2277663, 31460]

Setup:

C:\\Temp\\TEST1\\BACKUP : This has 2 folder and 2 files in each 
folder
C:\\Temp\\TEST2\\EXPORT : This has 2 folder and 2 files in one 
folder and  1 file in another folder


Total files : 5

Required output:
[31460, 138] - Array 1 for the FS C:\\Temp\\TEST1\\BACKUP
[2277663, 2277663, 31460] - Array 2 for the 
C:\\Temp\\TEST2\\EXPORT


import std.algorithm: filter, map, fold;
import std.parallelism: parallel;
import std.file: SpanMode, dirEntries, isDir;
import std.stdio: writeln;
import std.typecons: tuple;
import std.path: globMatch;
import std.array;
void main () {
    ulong[] Alternate;
    string[] Filesys = ["C:\\Temp\\TEST1\\BACKUP", "C:\\Temp\\TEST2\\EXPORT"];

    foreach (FFs; Filesys)
    {
        auto dFiles = dirEntries(FFs, SpanMode.shallow)
            .filter!(a => a.isDir)
            .map!(a => tuple(a.name, a.size)).array;

        foreach (d; dFiles) {
            auto SdFiles = dirEntries(join(["?\\", d[0]]), SpanMode.depth)
                .map!(a => tuple(a.size)).array;

            foreach (f; parallel(SdFiles, 1)) {
                Alternate ~= f[0];
                writeln(Alternate);
            }
        }
    }
}

From,
Vino.B


Hi Azi,

  The required out is like below

[31460]  - Array 1 for folder 1(all files in Folder 1) of the FS 
C:\\Temp\\TEST1\\BACKUP
[138]  - Array 2 for folder 2(all files in Folder 2) of the 
FS C:\\Temp\\TEST1\\BACKUP
[2277663, 2277663]  - Array 3 for folder 1(all files in Folder 1) 
of the FS C:\\Temp\\TEST2\\EXPOR
[31460] - Array 4 for folder 2(all files in Folder 2) the FS 
C:\\Temp\\TEST2\\EXPORT


Re: Performance Issue

2017-09-06 Thread Vino.B via Digitalmars-d-learn

On Wednesday, 6 September 2017 at 10:58:25 UTC, Azi Hassan wrote:

On Wednesday, 6 September 2017 at 08:10:35 UTC, Vino.B wrote:
in the next line of the code I say to list only folders that 
are greater than 10 MB, but it now lists all folders 
(folders whose size is less than 10 MB are also getting listed), 
not sure why.


Is the size in GB ? If so, then subdirTotalGB = 
(subdirTotal/1024/1024); needs to become subdirTotalGB = 
(subdirTotal/1024/1024/1024); for it to take effect. But do 
correct me if I'm wrong, I still haven't had my morning coffee.


Hi Azi,

  You are correct. I tried to implement the fold in a separate 
small program as below, but was not able to get the required 
output; when you execute the below program, the output you get is 
as below


Output:
[31460]
[31460, 138]
[31460, 138, 2277663]
[31460, 138, 2277663, 2277663]
[31460, 138, 2277663, 2277663, 31460]

Setup:

C:\\Temp\\TEST1\\BACKUP : This has 2 folder and 2 files in each 
folder
C:\\Temp\\TEST2\\EXPORT : This has 2 folder and 2 files in one 
folder and  1 file in another folder


Total files : 5

Required output:
[31460, 138] - Array 1 for the FS C:\\Temp\\TEST1\\BACKUP
[2277663, 2277663, 31460] - Array 2 for the 
C:\\Temp\\TEST2\\EXPORT


import std.algorithm: filter, map, fold;
import std.parallelism: parallel;
import std.file: SpanMode, dirEntries, isDir;
import std.stdio: writeln;
import std.typecons: tuple;
import std.path: globMatch;
import std.array;
void main () {
    ulong[] Alternate;
    string[] Filesys = ["C:\\Temp\\TEST1\\BACKUP", "C:\\Temp\\TEST2\\EXPORT"];

    foreach (FFs; Filesys)
    {
        auto dFiles = dirEntries(FFs, SpanMode.shallow)
            .filter!(a => a.isDir)
            .map!(a => tuple(a.name, a.size)).array;

        foreach (d; dFiles) {
            auto SdFiles = dirEntries(join(["?\\", d[0]]), SpanMode.depth)
                .map!(a => tuple(a.size)).array;

            foreach (f; parallel(SdFiles, 1)) {
                Alternate ~= f[0];
                writeln(Alternate);
            }
        }
    }
}

From,
Vino.B


Re: Performance Issue

2017-09-06 Thread Azi Hassan via Digitalmars-d-learn

On Wednesday, 6 September 2017 at 08:10:35 UTC, Vino.B wrote:
in the next line of the code I say to list only folders that 
are greater than 10 MB, but it now lists all folders 
(folders whose size is less than 10 MB are also getting listed), 
not sure why.


Is the size in GB ? If so, then subdirTotalGB = 
(subdirTotal/1024/1024); needs to become subdirTotalGB = 
(subdirTotal/1024/1024/1024); for it to take effect. But do 
correct me if I'm wrong, I still haven't had my morning coffee.


Re: Performance Issue

2017-09-06 Thread user1234 via Digitalmars-d-learn

On Tuesday, 5 September 2017 at 09:44:09 UTC, Vino.B wrote:

Hi,

 The below code consumes more memory and is slower; can you 
provide suggestions on how to overcome these issues?


string[][] csizeDirList (string FFs, int SizeDir) {
    ulong subdirTotal = 0;
    ulong subdirTotalGB;
    auto Subdata = appender!(string[][]);
    auto dFiles = dirEntries(FFs, SpanMode.shallow)
        .filter!(a => a.isDir && !globMatch(a.baseName, "*DND*"))
        .map!(a => tuple(a.name, a.size)).array;

    foreach (d; dFiles) {
        auto SdFiles = dirEntries(join(["?\\", d[0]]), SpanMode.depth)
            .map!(a => tuple(a.size)).array;

        foreach (f; parallel(SdFiles, 1)) {
            subdirTotal += f[0];
        }
        subdirTotalGB = (subdirTotal/1024/1024);
        if (subdirTotalGB > SizeDir) {
            Subdata ~= [d[0], to!string(subdirTotalGB)];
        }

        subdirTotal = 0;
    }
    return Subdata.data;
}

From,
Vino.B


Try to suppress the globMatch. Given that glob pattern, a ctRegex 
would do the job, or even simpler, `!a.canFind("DND")`.


Re: Performance Issue

2017-09-06 Thread Vino.B via Digitalmars-d-learn

On Tuesday, 5 September 2017 at 10:28:28 UTC, Stefan Koch wrote:

On Tuesday, 5 September 2017 at 09:44:09 UTC, Vino.B wrote:

Hi,

 The below code consumes more memory and is slower; can you 
provide suggestions on how to overcome these issues?


[...]


Much slower than what?


Hi,

  This code is used to get the size of folders on a NetApp NAS 
filesystem. NetApp has its own tool to perform this task, which is 
faster than this code; the difference is about 15-20 mins. While 
going through this website I found that we can use "fold" from 
std.algorithm.iteration, which should be faster than using the 
normal "+=", so I tried replacing the line "{ subdirTotal += f[0]; }" 
with { subdirTotal = f[0].fold!( (a, b) => a + b); }, and this 
produces the required output plus additional output: in the next 
line of the code I say to list only folders that are greater than 
10 MB, but it now lists all folders (folders whose size is less 
than 10 MB are also getting listed), not sure why.


Program:
string[][] coSizeDirList (string FFs, int SizeDir) {
    ulong subdirTotal = 0;
    ulong subdirTotalGB;
    auto Subdata = appender!(string[][]); Subdata.reserve(100);
    auto dFiles = dirEntries(FFs, SpanMode.shallow)
        .filter!(a => a.isDir && !globMatch(a.baseName, "*DND*"))
        .map!(a => tuple(a.name, a.size)).array;

    foreach (d; dFiles) {
        auto SdFiles = dirEntries(join(["?\\", d[0]]), SpanMode.depth)
            .map!(a => tuple(a.size)).array;

        foreach (f; parallel(SdFiles, 1)) {
            subdirTotal = f[0].fold!((a, b) => a + b);
        }
        subdirTotalGB = (subdirTotal/1024/1024);
        if (subdirTotalGB > SizeDir) {
            Subdata ~= [d[0], to!string(subdirTotalGB)];
        }

        subdirTotal = 0;
    }
    return Subdata.data;
}

Output:
C:\Temp\TEAM1\dir1 -> size greater than 10MB
C:\Temp\TEAM1\dir2 -> size less than 10MB

From,
Vino.B
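
As a side note on fold itself: summing a whole range of sizes in one pass (rather than folding each single-element tuple) would look roughly like this; the numbers are just the ones from the example output above:

```d
import std.algorithm : fold;

void main() {
    auto sizes = [31_460UL, 138, 2_277_663];
    // fold sums the whole range in one pass; seeding with 0 means an
    // empty directory simply sums to zero.
    auto total = sizes.fold!((a, b) => a + b)(0UL);
    assert(total == 2_309_261);
}
```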


Re: Performance Issue

2017-09-05 Thread Azi Hassan via Digitalmars-d-learn

On Tuesday, 5 September 2017 at 09:44:09 UTC, Vino.B wrote:

Hi,

 The below code consumes more memory and is slower; can you 
provide suggestions on how to overcome these issues?


You can start by dropping the .array conversions after 
dirEntries. That way your algorithm becomes lazy (as opposed 
to eager), meaning it won't allocate an entire array of 
DirEntry[]. It will instead process the DirEntries one at a time, 
resulting in less memory consumption.
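
The eager/lazy distinction can be seen in a tiny sketch (illustrative only):

```d
import std.algorithm : map;
import std.range : iota;
import std.array : array;

void main() {
    // Eager: .array materializes every element up front.
    auto eager = iota(0, 1_000).map!(a => a * 2).array;
    assert(eager.length == 1_000);

    // Lazy: a range object is built, but nothing is computed or
    // allocated until the range is actually walked.
    auto lazyRange = iota(0, 1_000).map!(a => a * 2);
    static assert(!is(typeof(lazyRange) == int[]));
    assert(lazyRange.front == 0);
}
```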


I didn't understand the join(["?\\", d[0]]) part, maybe you 
meant to write join("?\\", d[0]) ?


If appender is too slow, you can experiment with a dynamic array 
whose capacity was preallocated : string[][] Subdata; 
Subdata.reserve(1); In this case Subdata will hold enough 
space for 1 string[]s, which will result in better 
performance.


Here's the updated code (sans .array) in case any one wants to 
reproduce the issue :


import std.stdio;
import std.conv;
import std.typecons;
import std.array;
import std.path;
import std.container;
import std.file;
import std.parallelism;
import std.algorithm;

void main()
{
".".csizeDirList(1024).each!writeln;
}

string[][] csizeDirList (string FFs, int SizeDir) {
ulong subdirTotal = 0;
ulong subdirTotalGB;
auto Subdata = appender!(string[][]);
auto dFiles = dirEntries(FFs, SpanMode.shallow)
.filter!(a => a.isDir && !globMatch(a.baseName, "*DND*"))
.map!(a => tuple(a.name, a.size));

    foreach (d; dFiles) {
        auto SdFiles = dirEntries(join(["?\\", d[0]]), SpanMode.depth)
            .map!(a => tuple(a.size));
        foreach (f; parallel(SdFiles,1)) {
            subdirTotal += f[0];
        }
subdirTotalGB = (subdirTotal/1024/1024);
if (subdirTotalGB > SizeDir) {
Subdata ~= [d[0], to!string(subdirTotalGB)];
}
subdirTotal = 0;
}
return Subdata.data;
}


Re: Performance Issue

2017-09-05 Thread Stefan Koch via Digitalmars-d-learn

On Tuesday, 5 September 2017 at 09:44:09 UTC, Vino.B wrote:

Hi,

 The below code consumes more memory and is slower; can you 
provide suggestions on how to overcome these issues?


[...]


Much slower than what?


Re: Performance issue with GC

2016-09-07 Thread Yuxuan Shui via Digitalmars-d-learn

On Wednesday, 7 September 2016 at 22:54:14 UTC, Basile B. wrote:
On Wednesday, 7 September 2016 at 21:20:30 UTC, Yuxuan Shui 
wrote:
I have a little data processing program which makes heavy use 
of associative arrays, and GC almost doubles the runtime of it 
(~2m with GC disabled -> ~4m).


I just want to ask what's the best practice in this situation? 
Do I just use GC.disable and manually run GC.collect 
periodically?


I'd say yes.

Another option: https://github.com/economicmodeling/containers. 
The HashMap will give you full control over the memory allocations.


This is a really nice library! Thanks a lot.


Re: Performance issue with GC

2016-09-07 Thread Basile B. via Digitalmars-d-learn

On Wednesday, 7 September 2016 at 21:20:30 UTC, Yuxuan Shui wrote:
I have a little data processing program which makes heavy use 
of associative arrays, and GC almost doubles the runtime of it 
(~2m with GC disabled -> ~4m).


I just want to ask what's the best practice in this situation? 
Do I just use GC.disable and manually run GC.collect 
periodically?


I'd say yes.

Another option: https://github.com/economicmodeling/containers. 
The HashMap will give you full control over the memory allocations.
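
That GC.disable plus periodic GC.collect pattern might be sketched like this (the batch structure and sizes here are hypothetical):

```d
import core.memory : GC;

void main() {
    GC.disable();               // stop automatic collections
    scope (exit) GC.enable();   // restore normal behaviour on exit

    foreach (batch; 0 .. 10) {
        // Allocation-heavy work, e.g. building associative arrays.
        int[int] aa;
        foreach (i; 0 .. 1_000)
            aa[i] = i;

        GC.collect();           // collect at a point you choose,
                                // e.g. between processing batches
    }
}
```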