Re: Display a random image with vibe.d

2021-06-21 Thread Christian Köstlin via Digitalmars-d-learn

On 2021-06-20 17:14, vnr wrote:

On Sunday, 20 June 2021 at 14:28:26 UTC, jfondren wrote:

On Sunday, 20 June 2021 at 13:58:22 UTC, vnr wrote:


Thanks for the answers, I understand better what is going on.

So, what should I do to make my server respond with a random image, 
and not the random image page? I'm fairly new to vibe.d, so I don't 
yet know the intricacies of how to handle this style of thing and I 
couldn't find how to do it in the documentation.


Thank you.


Responding with a random image is an option, but what you have is
a good start, you just need to also serve images on request.

I'm very distracted right now or I would've included more in my
earlier post, but Vibe's online documentation has what you need.

Try, especially, https://vibed.org/docs#http-routing

You want a route for the random-image page, and you want to serve
static files under images/


Great, thanks a lot, it works as expected! Here is the code used 
(app.d), for those who are interested:


```
import vibe.vibe;

void main()
{
    auto router = new URLRouter;
    router.get("/", &index);
    router.get("*", serveStaticFiles("public"));

    auto settings = new HTTPServerSettings;
    settings.bindAddresses = ["::1", "127.0.0.1"];
    settings.port = 8080;

    auto listener = listenHTTP(settings, router);
    scope (exit) listener.stopListening();

    logInfo("Please open http://127.0.0.1:8080/ in your browser.");
    runApplication();
}

/// The index page
void index(HTTPServerRequest req, HTTPServerResponse res)
{
    import std.random;
    import std.format;

    auto rnd = Random(unpredictableSeed);
    const rndimg = format("/images/rndimg/img%d.jpg", uniform(1, 27, rnd));

    res.render!("index.dt", req, rndimg);
}
```
The Random object should only be created once, but here it's created for 
every request.

1. It is relatively slow to reinitialize it.
2. If you really want a uniform distribution, it's probably better not to 
throw the UniformRandomNumberGenerator away.


You can simplify that by using just uniform(1, 27); it should fall back 
to the default rndGen that is initialized per thread 
(https://dlang.org/library/std/random/rnd_gen.html).
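
For illustration, a minimal sketch of the handler simplified that way (same 
index.dt template as above assumed):

```
/// The index page, using the per-thread default RNG
void index(HTTPServerRequest req, HTTPServerResponse res)
{
    import std.format : format;
    import std.random : uniform;

    // uniform(1, 27) falls back to the thread-local rndGen,
    // so no Random object is constructed per request
    const rndimg = format("/images/rndimg/img%d.jpg", uniform(1, 27));
    res.render!("index.dt", req, rndimg);
}
```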


Here is a benchmark of both functions:

```
import std;
import std.datetime.stopwatch;

auto onlyUniform() {
    return uniform(1, 27);
}

auto withNewRandom() {
    auto rnd = Random(unpredictableSeed);
    return uniform(1, 27, rnd);
}

void main() {
    100_000.benchmark!(onlyUniform, withNewRandom).writeln;
}
```


Re: How to work around the infamous dual-context when using delegates together with std.parallelism

2021-05-31 Thread Christian Köstlin via Digitalmars-d-learn

On 2021-05-31 18:50, Christian Köstlin wrote:

On 2021-05-31 13:40, CandG wrote:

On Thursday, 27 May 2021 at 14:44:29 UTC, Steven Schveighoffer wrote:

On 5/27/21 10:13 AM, Christian Köstlin wrote:
P.S.: I still do not get how to post formatted snippets with 
thunderbird to the newsgroup/forum :/


It's not possible currently.


I no longer use thunderbird, but:
  - 
https://github.com/CyberShadow/DFeed/commit/2e60edab2aedd173c7ea3712cb9500d90d4b795d#diff-0ecfc518dcbf670fdac54985dd56663a16a0806fd57a05ac09bf40a933b851e5R338 

  - IIRC thunderbird allows changing headers: try adding 
"Content-Type" to the comma-separated list "mail.compose.other.header"
  - Then in the composition window make sure Content-Type is set to 
something like "text/plain; markup=markdown"

Thanks for the tip, let's see if it works:

```D
void main(string[] args) {
   writeln("Hello World");
}
```

Kind regards,
Christian



Another try.

```D
void main(string[] args) {
   writeln("Hello World");
}
```



Re: How to work around the infamous dual-context when using delegates together with std.parallelism

2021-05-31 Thread Christian Köstlin via Digitalmars-d-learn

On 2021-05-31 13:40, CandG wrote:

On Thursday, 27 May 2021 at 14:44:29 UTC, Steven Schveighoffer wrote:

On 5/27/21 10:13 AM, Christian Köstlin wrote:
P.S.: I still do not get how to post formatted snippets with 
thunderbird to the newsgroup/forum :/


It's not possible currently.


I no longer use thunderbird, but:
  - 
https://github.com/CyberShadow/DFeed/commit/2e60edab2aedd173c7ea3712cb9500d90d4b795d#diff-0ecfc518dcbf670fdac54985dd56663a16a0806fd57a05ac09bf40a933b851e5R338 

  - IIRC thunderbird allows changing headers: try adding "Content-Type" 
to the comma-separated list "mail.compose.other.header"
  - Then in the composition window make sure Content-Type is set to 
something like "text/plain; markup=markdown"

Thanks for the tip, let's see if it works:

```D
void main(string[] args) {
  writeln("Hello World");
}
```

Kind regards,
Christian



Re: where do I find the complete phobos function list names ?

2021-05-27 Thread Christian Köstlin via Digitalmars-d-learn

On 2021-05-26 01:46, Paul Backus wrote:

On Tuesday, 25 May 2021 at 22:05:16 UTC, someone wrote:
I was unsuccessfully searching the site for them in the form of a 
master index to begin with.


I need them, in plain text, in order to add them to a VIM custom 
syntax highlight plugin I already made which I am already using but is 
lacking phobos support.


Can anyone point me to the right place please ?


There is no global index in the online documentation; the best you can 
get is an index of each module.


If you really want this, your best bet is probably to run a source-code 
indexer like `ctags` on the Phobos source tree, and do some scripting to 
transform the results into something usable in your Vim plugin.
E.g. I found the file https://dlang.org/library/symbols.js, which is used 
by pages like https://dlang.org/library/std/algorithm/sorting/sort.html 
to implement the search. Perhaps that helps.


Kind regards,
Christian



Re: How to work around the infamous dual-context when using delegates together with std.parallelism

2021-05-27 Thread Christian Köstlin via Digitalmars-d-learn

On 2021-05-27 18:56, Ali Çehreli wrote:

On 5/27/21 9:19 AM, Ali Çehreli wrote:


   auto result = new string[users.length];
   users.enumerate.parallel.each!(en => result[en.index] = servers.doSomething(en.value));

   writeln(result);


I still like the foreach version more:

    auto result = new string[users.length];
    foreach (i, user; users.parallel) {
        result[i] = servers.doSomething(user);
    }
    writeln(result);

Ali


Hi Ali,

both of those variants do work for me, thanks a lot!
Still not sure which I prefer (almost too many options now :) ).

I am so happy that I asked in this forum, help is much appreciated!

Christian


Re: where do I find the complete phobos function list names ?

2021-05-27 Thread Christian Köstlin via Digitalmars-d-learn

On 2021-05-26 01:46, Paul Backus wrote:

On Tuesday, 25 May 2021 at 22:05:16 UTC, someone wrote:
I was unsuccessfully searching the site for them in the form of a 
master index to begin with.


I need them, in plain text, in order to add them to a VIM custom 
syntax highlight plugin I already made which I am already using but is 
lacking phobos support.


Can anyone point me to the right place please ?


There is no global index in the online documentation; the best you can 
get is an index of each module.


If you really want this, your best bet is probably to run a source-code 
indexer like `ctags` on the Phobos source tree, and do some scripting to 
transform the results into something usable in your Vim plugin.
Where is the index for the search functionality on dlang.org located? 
Could that be used?


Kind regards,
Christian


Re: How to work around the infamous dual-context when using delegates together with std.parallelism

2021-05-27 Thread Christian Köstlin via Digitalmars-d-learn

On 2021-05-27 15:00, sighoya wrote:

On Thursday, 27 May 2021 at 12:58:28 UTC, Christian Köstlin wrote:

That looks nice, but unfortunately my data for servers and users in 
the real world is not static but comes from a config file.


Okay, but then parametrizing the static lambda with runtime parameters 
should work. The important fact is that the closure needs to be static.

Ah thanks, now I understand.
So what I came up with now is a combination of the things mentioned:

```D
import std;

string doSomething(string[] servers, string user) {
    return user ~ servers[0];
}

struct UserWithServers {
    string user;
    string[] servers;
}

void main(string[] args) {
    auto servers = args;
    auto users = ["u1", "u2", "u3"];
    auto usersWithServers = users.map!(user => UserWithServers(user, servers)).array;

    static fn = function(UserWithServers user) => user.servers.doSomething(user.user);

    writeln(taskPool.amap!(fn)(usersWithServers));
}
```

This also makes the example a little bit more "realistic" by using dynamic 
data for the servers.

I would like to use `auto fn`, but somehow saying that it's a function is 
not enough for dmd. From my understanding, a function would never need a 
context?!?


Thanks a lot!
Christian

P.S.: I still do not get how to post formatted snippets with thunderbird 
to the newsgroup/forum :/


Re: How to work around the infamous dual-context when using delegates together with std.parallelism

2021-05-27 Thread Christian Köstlin via Digitalmars-d-learn

On 2021-05-27 14:48, sighoya wrote:

On Thursday, 27 May 2021 at 12:17:36 UTC, Christian Köstlin wrote:
Can you explain me, where here a double context is needed? Because all 
data now should be passed as arguments to amap?


Kind regards,
Christian


I believe D's type system isn't smart enough to see the independence 
between context and closure; otherwise your original example would also 
work, as users and servers are context independent.


What about:

```D
import std;

string doSomething(string[] servers, string user) {
    return user ~ servers[0];
}

void main() {
    static servers = ["s1", "s2", "s3"];
    static users = ["u1", "u2", "u3"];
    static lambda = (string user) => servers.doSomething(user);
    writeln(map!(user => servers.doSomething(user))(users));
    writeln(taskPool.amap!(lambda)(users));
}
```

That looks nice, but unfortunately my data for servers and users in the 
real world is not static but comes from a config file.


Re: How to work around the infamous dual-context when using delegates together with std.parallelism

2021-05-27 Thread Christian Köstlin via Digitalmars-d-learn

On 2021-05-27 13:11, sighoya wrote:

On Thursday, 27 May 2021 at 09:58:40 UTC, Christian Köstlin wrote:

I have this small program here

test.d:
```
import std;
string doSomething(string[] servers, string user) {
    return user ~ servers[0];
}
void main() {
    auto servers = ["s1", "s2", "s3"];
    auto users = ["u1", "u2", "u3"];
    writeln(map!(user => servers.doSomething(user))(users));
    writeln(taskPool.amap!(user => servers.doSomething(user))(users));
}
```




I think it relates to https://issues.dlang.org/show_bug.cgi?id=5710

The reason is that amap requires a this pointer of type TaskPool and a 
context pointer to the closure which belongs to main, at least because 
it requires servers. Having both isn't possible due to problems in 
non-DMD compilers.


If you rewrite it more statically:
```D
import std;

string doSomething(string[] servers, string user) {
    return user ~ servers[0];
}

string closure(string user)
{
    return servers.doSomething(user);
}

auto servers = ["s1", "s2", "s3"];

int main()
{
    auto users = ["u1", "u2", "u3"];
    writeln(map!(user => servers.doSomething(user))(users));
    writeln(taskPool.amap!(closure)(users));
    return 0;
}
```

PS: Just enable markdown if you want to highlight D code

On a second note: I needed to make servers __gshared in my real program, as 
otherwise it's a thread-local variable (in the small demo program this did 
not occur, I guess because the parallel operations were too fast).
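
For illustration, a minimal sketch of that declaration (module-level 
variables in D are thread-local by default, so without __gshared each 
worker thread spawned by taskPool sees its own empty copy):

```D
// hypothetical module-level state, visible to all of taskPool's threads
__gshared string[] servers;

void loadConfig(string[] configuredServers)
{
    servers = configuredServers;
}
```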


Kind regards,
Christian


Re: How to work around the infamous dual-context when using delegates together with std.parallelism

2021-05-27 Thread Christian Köstlin via Digitalmars-d-learn
Thanks for the proposed solution. It also works in my slightly bigger 
program (although I do not like to make servers more global).


I also tried the following (which unfortunately also does not work as 
intended):


```D
import std;

string doSomething(string[] servers, string user) {
    return user ~ servers[0];
}

int main()
{
    auto users = ["u1", "u2", "u3"];
    auto servers = ["s1", "s2", "s3"];
    auto usersWithServers = users.map!(user => tuple!("user", "servers")(user, servers)).array;
    writeln(map!(userWithServers => userWithServers.servers.doSomething(userWithServers.user))(usersWithServers));
    writeln(taskPool.amap!(userWithServers => userWithServers.servers.doSomething(userWithServers.user))(usersWithServers));
    return 0;
}
```

Here I try to put the data I need together into one tuple ("manually") 
and then pass it all to amap. Can you explain to me where a dual context 
is needed here? All the data should now be passed as arguments to amap.


Kind regards,
Christian


How to work around the infamous dual-context when using delegates together with std.parallelism

2021-05-27 Thread Christian Köstlin via Digitalmars-d-learn

I have this small program here

test.d:
```
import std;

string doSomething(string[] servers, string user) {
    return user ~ servers[0];
}

void main() {
    auto servers = ["s1", "s2", "s3"];
    auto users = ["u1", "u2", "u3"];
    writeln(map!(user => servers.doSomething(user))(users));
    writeln(taskPool.amap!(user => servers.doSomething(user))(users));
}
```

The first map just works as expected; for the parallel amap from 
std.parallelism (https://dlang.org/phobos/std_parallelism.html), though, 
I get the following deprecation warning with dmd:


```
/Users/.../dlang/dmd-2.096.1/osx/bin/../../src/phobos/std/parallelism.d(1711): 
Deprecation: function `test.main.amap!(string[]).amap` function requires 
a dual-context, which is deprecated

```

for ldc the build fails with:
```
/Users/.../dlang/ldc-1.26.0/bin/../import/std/parallelism.d(1711): 
Deprecation: function `test.main.amap!(string[]).amap` function requires 
a dual-context, which is deprecated

test.d(9): instantiated from here: `amap!(string[])`
/Users/.../dlang/ldc-1.26.0/bin/../import/std/parallelism.d(1711): 
Error: function `test.main.amap!(string[]).amap` requires a 
dual-context, which is not yet supported by LDC

```


Thanks in advance for your insights,
Christian


Re: how do I implement opSlice for retro range?

2021-05-14 Thread Christian Köstlin via Digitalmars-d-learn

On 2021-05-14 05:49, Jack wrote:
How can I implement slicing in the retro range? I'd like to do this 
without allocating a new array with .array from std.array; can I do that?


use like this:

```d
     auto arr = [1, 2, 3, 4, 5];
     auto a = new A!int(arr);
     auto b = a.retro[0 .. 2]; // 4, 5
```

the class:

```d

class A(T)
{
    private T[] arr;

    this(T[] a)
    {
        arr = a;
    }

    auto opIndex() nothrow
    {
        return Range(arr);
    }

    auto retro() { return RangeRetro(arr); }

    protected static struct Range
    {
        T[] a;
        T front() { return a[0]; }
        T back() { return a[$ - 1]; }
        void popFront() { a = a[1 .. $]; }
        bool empty() { return a.length == 0; }
    }

    protected static struct RangeRetro
    {
        import std.range : popFront;
        import std.range : popBack;

        T[] a;
        T front() { return a[$ - 1]; }
        T back() { return a[0]; }
        void popBack() { a.popFront(); }
        void popFront() { a.popBack(); }
        bool empty() { return a.length == 0; }

        auto opSlice(size_t start, size_t end)
        {
            ???
        }
    }
}
```



arr.retro()[0..2] already works.

see https://run.dlang.io/is/U8u3br
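
A minimal sketch of that (retro merely wraps the underlying array, so the 
slice stays lazy and nothing new is allocated):

```d
import std.range : retro;
import std.stdio : writeln;

void main()
{
    auto arr = [1, 2, 3, 4, 5];
    auto b = arr.retro[0 .. 2]; // lazily yields 5, 4 -- no .array needed
    writeln(b);
}
```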



Re: How to use dub with our own package

2021-05-12 Thread Christian Köstlin via Digitalmars-d-learn

On 2021-05-12 21:22, Vinod K Chandran wrote:

On Wednesday, 12 May 2021 at 18:26:39 UTC, Christian Köstlin wrote:


Are you really interested in doing winglib as a separate dub package?
If not you could just do a `dub init yourappname` which gives you the 
basic skeleton. something like:


.
├── dub.sdl
└── source
    └── app.d


then you replace app.d with your file + put your winglib with 
package.d into a subfolder under source ->


.
├── dub.sdl
└── source
    ├── app.d
    └── winglib
        ├── othermodule.d
        └── package.d


then a dub build will just build your project including the submodules 
...


if you need a separate dub package then the other answers lead the way.
https://dub.pm/commandline.html#add-local


That's really helpful. All i need to do is re-arrange my folder setup. 
Thanks a lot. :)


if you want to do a separate package later on, you only have to change a 
little in your project setup, code can stay the same.


kind regards,
Christian



Re: How to use dub with our own package

2021-05-12 Thread Christian Köstlin via Digitalmars-d-learn

On 2021-05-12 15:37, Vinod K Chandran wrote:

Hi all,
I am creating a hobby project related to win api gui functions. I 
would like to work with dub. But how do I use dub in my project?

1. All my gui library modules are located in a folder named "winglib".
2. And that folder also contains a d file called "package.d"
3. "package.d" contains all the public imports.
4. Outside this winglib folder, I have my main file called "app.d"
5. "app.d" imports "winglib".
So in this setup, how do I use dub? Thanks in advance.

Are you really interested in doing winglib as a separate dub package?
If not you could just do a `dub init yourappname` which gives you the 
basic skeleton. something like:


.
├── dub.sdl
└── source
    └── app.d


then you replace app.d with your file + put your winglib with package.d 
into a subfolder under source ->


.
├── dub.sdl
└── source
    ├── app.d
    └── winglib
        ├── othermodule.d
        └── package.d


then a dub build will just build your project including the submodules ...

if you need a separate dub package then the other answers lead the way.
https://dub.pm/commandline.html#add-local
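
For reference, the dub.sdl that `dub init` generates looks roughly like this 
(name and description are placeholders), and a plain `dub build` then 
compiles everything under source/ automatically:

```
name "yourappname"
description "A minimal D application."
```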


Re: serve-d and emacs

2021-04-30 Thread Christian Köstlin via Digitalmars-d-learn

On 26.04.21 21:13, WebFreak001 wrote:

On Monday, 26 April 2021 at 18:45:08 UTC, Christian Köstlin wrote:

Does anybody use serve-d with emacs (lsp-mode or eglot)?
I would love to see the configuration!

Kind regards,
Christian


if you configure it yourself, feel free to share the configuration and 
maybe PR it to serve-d repo.


Basic setup should be quite easy, see vim for reference: 
https://github.com/Pure-D/serve-d/blob/master/editor-vim.md

I threw together a "minimal" emacs configuration that can be used
if you have just a plain emacs and dlang installation.
See https://github.com/gizmomogwai/demacs

Kind regards,
Christian


Re: serve-d and emacs

2021-04-28 Thread Christian Köstlin via Digitalmars-d-learn

On 26.04.21 21:13, WebFreak001 wrote:

On Monday, 26 April 2021 at 18:45:08 UTC, Christian Köstlin wrote:

Does anybody use serve-d with emacs (lsp-mode or eglot)?
I would love to see the configuration!

Kind regards,
Christian


if you configure it yourself, feel free to share the configuration and 
maybe PR it to serve-d repo.


Basic setup should be quite easy, see vim for reference: 
https://github.com/Pure-D/serve-d/blob/master/editor-vim.md

I finally got it working for me.
It's a little tricky, because the basic setup works e.g. with emacs 27.2 
or newer, but not with 27.1.
All that is needed (if you have the right emacs version and use straight 
for installing packages) is:


(use-package d-mode
  :straight t)

(use-package eglot
  :straight t
  :init (progn
  (add-hook 'd-mode-hook 'eglot-ensure)
  ))
(add-to-list
   'eglot-server-programs
   '(d-mode . ("PATH_TO_SERVE_D/serve-d")))


With a plain emacs installation the following should work:

(require 'package)
(add-to-list 'package-archives '("melpa" . "https://melpa.org/packages/") t)

(package-initialize)
(package-refresh-contents)
(package-install 'project)
(package-install 'd-mode)
(package-install 'eglot)
(require 'project)
(require 'd-mode)
(require 'eglot)

(add-to-list
   'eglot-server-programs
   '(d-mode . ("FULL_PATH_TO_SERVE_D")))
(add-hook 'd-mode-hook 'eglot-ensure)

(One emacs restart might be necessary, as there is a version conflict for 
the dependency "project".)


Kind regards,
Christian



serve-d and emacs

2021-04-26 Thread Christian Köstlin via Digitalmars-d-learn

Does anybody use serve-d with emacs (lsp-mode or eglot)?
I would love to see the configuration!

Kind regards,
Christian


Re: Anything in D to avoid check for null everywhere?

2021-01-14 Thread Christian Köstlin via Digitalmars-d-learn

On 12.01.21 22:37, Jack wrote:
I was looking for a way to avoid null checks everywhere. I was checking 
the Null object pattern, or using something like the enforce pattern, or 
even whether I could make a new operator and implement something like C#'s 
?. operator, which Java was going to have but refused[1] (the proposal 
doesn't behave exactly like C#'s actually); Kotlin also got something in 
this area[2].

What are some D ways to avoid those checks?

[1]: 
https://mail.openjdk.java.net/pipermail/coin-dev/2009-March/47.html

[2]: https://kotlinlang.org/docs/reference/null-safety.html#safe-calls

Did you have a look at https://code.dlang.org/packages/optional?
Especially https://aliak00.github.io/optional/optional/oc/oc.html might 
go in the right direction.
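
For illustration, a hedged sketch of what chaining with that package might 
look like (untested; the class names are made up, and the exact `oc` 
semantics are as per the linked docs):

```D
// hypothetical example; assumes the `optional` dub package and its `oc` helper
import optional;
import std.stdio : writeln;

class Address { string city = "Berlin"; }
class Person { Address address; } // address may stay null

void main()
{
    auto p = new Person();
    // oc dispatches the member chain safely; if a link is null the
    // result is simply empty instead of a segfault
    writeln(oc(p).address.city);
}
```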



Kind regards,
Christian


Re: DMD support for Apples new silicon

2021-01-11 Thread Christian Köstlin via Digitalmars-d-learn

On 10.01.21 17:29, Guillaume Piolat wrote:

On Sunday, 10 January 2021 at 16:03:53 UTC, Christian Köstlin wrote:


Good news!
I was hoping for support in ldc, but dmd's super fast compile times 
would be very welcome. I guess it's more work to put an ARM backend 
there.


Kind regards,
Christian


It is indeed more work and up to the DMD leadership what should happen.

You can already switch between compilers with:
   dub --compiler dmd
   dub --compiler ldc2

so as to benefit from dmd fast build times, and then release with ldc.

Apple Silicon and Rosetta 2 are really quite fast, so you should 
experience pretty quick build times there anyway.

I do not have the new Apple HW, but knowing that dlang is covered is a good 
thing!


thanks a lot!

christian



Re: DMD support for Apples new silicon

2021-01-10 Thread Christian Köstlin via Digitalmars-d-learn

On 10.01.21 15:50, Guillaume Piolat wrote:

On Sunday, 10 January 2021 at 14:22:25 UTC, Christian Köstlin wrote:

Hi all,

are there any plans on supporting Apple's new ARM silicon with DMD, or 
would this be something for ldc?


Kind regards,
Christian


Hello Christian,

LDC since 1.24+ support cross-compiling to Apple Silicon.
Here is how to build for it on Big Sur.


1. Download ldc2-1.24.0-osx-x86_64.tar.xz (or later version)
    from this page: https://github.com/ldc-developers/ldc/releases

2. Unzip where you want, and put the bin/ subdirectory in your PATH envvar

    This will give you the ldc2 and dub command in your command-line, 
however they won't work straight away in Catalina/Big Sur because of 
lacking notarization.


3. (optional) In this case, in Finder, right-click + click "Open" on the 
bin/dub and bin/ldc2 binaries since it is not notarized software, and 
macOS will ask for your approval first. Once you've done that, dub and 
ldc2 can be used from your Terminal normally.


4. Type 'ld' in Terminal; this will install the necessary latest 
Xcode.app if it isn't already installed. That is a painful 10 GB download 
in general. You can also install Xcode from the App Store. People usually 
target Big Sur arm64 from Catalina or Big Sur.


5. You can target normal x86_64 (Rosetta 2) with:

   ldc2 
   dub 

6. If you want to target arm64, adapt the SDK path in etc/ldc2.conf with 
your actual Xcode macOS11.0 path, and then use 
-mtriple=arm64-apple-macos to cross-compile.


   ldc2 -mtriple=arm64-apple-macos 
   dub -a arm64-apple-macos 

Debugging and notarization is a whole another topic then.


Good news!
I was hoping for support in ldc, but dmd's super fast compile times would 
be very welcome. I guess it's more work to put an ARM backend there.


Kind regards,
Christian


DMD support for Apples new silicon

2021-01-10 Thread Christian Köstlin via Digitalmars-d-learn

Hi all,

are there any plans on supporting Apple's new ARM silicon with DMD, or 
would this be something for ldc?


Kind regards,
Christian


Re: dynamic array .length vs .reserve - what's the difference?

2020-08-02 Thread Christian Köstlin via Digitalmars-d-learn

On 31.07.20 06:28, Ali Çehreli wrote:

On 7/30/20 4:42 PM, wjoe wrote:

 > So .capacity can't be assigned a value like length to reserve the RAM ?

Yes, a read-only property...

 >> auto a = b;
 >> b = b[0 .. $-1];
 >> b ~= someT;
 >>
 >> If that last line is done in-place, then it overwrites a[$-1].
 >
 > So this is a case of sharing being terminated ?

Yes but the "sharing being terminated" phrase was my attempt at 
explaining things, which did not catch on. :)


 > Expired structs are put back into (appended to) the array for reuse.
 > When the length of the array == 0, upon releasing a struct, this array
 > is reallocated which isn't supposed to happen. It should just grow like
 > it did with length > 1.
 > assumeSafeAppend should accomplish that :)

Yes, assumeSafeAppend is exactly for cases like that and it helps.

Another option, which is curiously said to be more performant in memory 
allocation than native arrays, is std.array.Appender. I've used 
function-local static Appenders to cut down on memory allocation. Here 
is uncompiled pseudo code:


void foo() {
   import std.array : Appender;

   static Appender!(int[]) a;  // note: Appender takes the array type
   a.clear();  // <- Clear state from last execution of this function.
               //    'a' still holds on to its memory.

   while (someCondition()) {
     a ~= 42;
   }

   // Use 'a' here
}

So, 'a' will keep the largest buffer it has ever needed up to this point, 
which may be exactly what is desired.


The cool thing is, because data is thread-local by default in D, every 
thread gets its own copy of 'a', so there is no danger of a data race. 
:) (Warning: Don't call foo() recursively though. ;) )


Ali


That's a trick I need to remember!

Thanks, Ali!
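
For reference, a runnable variant of Ali's sketch (with someCondition 
replaced by a made-up loop bound):

```D
import std.array : Appender;
import std.stdio : writeln;

void foo(int n) {
    static Appender!(int[]) a;
    a.clear(); // resets length to 0 but keeps the allocated capacity

    foreach (i; 0 .. n)
        a ~= i;

    writeln(a.data.length, " elements appended");
}

void main() {
    foo(1000);
    foo(10); // reuses the buffer grown in the first call
}
```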


Vibe.d and shared data synchronization

2020-04-07 Thread Christian Köstlin via Digitalmars-d-learn

Hi,

I wrote a very small vibe.d based URL shortener.
It has an in-memory database that is in theory shared across request 
threads. At the moment I do not distribute over the vibe.d threadpool 
(https://vibed.org/features#multi-threading), but I would like to.

What would be the best way to share this database?
Is there something like a many-readers, one-writer implementation for dlang?
Is it somehow enforced by vibe.d that only shared objects can be shared?

The code is located at https://github.com/gizmomogwai/shortened
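
For the many-readers/one-writer part, druntime ships 
core.sync.rwmutex.ReadWriteMutex; a minimal sketch of guarding the database 
with it (the string-to-string schema is assumed, and for vibe.d fibers the 
task-aware primitives in vibe.core.sync may be the better fit):

```D
import core.sync.rwmutex : ReadWriteMutex;

__gshared ReadWriteMutex lock;
__gshared string[string] database; // assumed: short key -> long URL

shared static this()
{
    lock = new ReadWriteMutex;
}

string lookup(string key)
{
    synchronized (lock.reader) // many concurrent readers allowed
        return database.get(key, null);
}

void shorten(string key, string url)
{
    synchronized (lock.writer) // writers get exclusive access
        database[key] = url;
}
```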


Kind regards,
Christian


Re: How to get type returned by e.g. std.algorithm.iteration.filter

2019-05-19 Thread Christian Köstlin via Digitalmars-d-learn

Last version, using more from the outer template:

#!/usr/bin/env rdmd
import std.stdio;
import std.algorithm;
import std.typecons;
import std.array;
import std.range;
import std.traits;

auto byMinimum(Ranges)(Ranges ranges)
{
    auto getNonEmpty()
    {
        return ranges.filter!("!a.empty");
    }

    auto nonEmpty = getNonEmpty;

    auto minimumOfRanges()
    {
        // dfmt off
        return nonEmpty
            .map!(range => tuple!("range", "line")(range, range.front))
            .minElement!("a.line");
        // dfmt on
    }

    ReturnType!(minimumOfRanges) minRangeAndLine;
    struct ByMinimum
    {
        bool empty()
        {
            return nonEmpty.empty;
        }

        auto front()
        {
            minRangeAndLine = minimumOfRanges;
            return minRangeAndLine;
        }

        void popFront()
        {
            nonEmpty = getNonEmpty;
            minRangeAndLine.range.popFront;
        }
    }

    return ByMinimum();
}

void main(string[] files)
{
    foreach (n; files[1 .. $].map!(name => File(name).byLine(No.keepTerminator)).array.byMinimum)
    {
        writeln(n.line);
    }
}



Re: How to get type returned by e.g. std.algorithm.iteration.filter

2019-05-19 Thread Christian Köstlin via Digitalmars-d-learn

On 19.05.19 20:38, Jacob Carlborg wrote:

On 2019-05-19 15:36, Christian Köstlin wrote:

Unfortunately I have no idea how to even store the result of this 
search in an attribute of ByMinimum, as I cannot write out its type.


In general you can use `typeof(expression)`, where `expression` is 
the expression you want to get the type of.



Thanks for the hint. The best I could come up with is:

#!/usr/bin/env rdmd
import std.stdio;
import std.algorithm;
import std.typecons;
import std.array;
import std.range;
import std.traits;

auto byMinimum(Ranges)(Ranges ranges)
{
    auto getNonEmpty()
    {
        return ranges.filter!("!a.empty");
    }

    auto minimumOfRanges(Ranges)(Ranges ranges)
    {
        // dfmt off
        return ranges
            .map!(range => tuple!("range", "line")(range, range.front))
            .minElement!("a.line");
        // dfmt on
    }

    auto nonEmpty = getNonEmpty;
    ReturnType!(minimumOfRanges!(typeof(nonEmpty))) minRangeAndLine;
    struct ByMinimum(Ranges)
    {
        bool empty()
        {
            return nonEmpty.empty;
        }

        auto front()
        {
            minRangeAndLine = minimumOfRanges(nonEmpty);
            return minRangeAndLine;
        }

        void popFront()
        {
            minRangeAndLine.range.popFront;
            nonEmpty = getNonEmpty;
        }
    }
    return ByMinimum!(Ranges)();
}

void main(string[] files)
{
    foreach (n; files[1 .. $].map!(name => File(name).byLine(No.keepTerminator)).array.byMinimum)
    {
        writeln(n.line);
    }
}


Still it looks a little clumsy. Any ideas?

--
Christian


How to get type returned by e.g. std.algorithm.iteration.filter

2019-05-19 Thread Christian Köstlin via Digitalmars-d-learn

I would like to join several sorted files into one big sorted file.

For that I came up with this snippet:

#!/usr/bin/env rdmd
import std.stdio;
import std.algorithm;
import std.typecons;
import std.array;
import std.range;

auto byMinimum(Ranges)(Ranges ranges)
{
auto getNonEmpty()
{
return ranges.filter!("!a.empty");
}

auto nonEmpty = getNonEmpty;
struct ByMinimum(Ranges)
{
bool empty()
{
return nonEmpty.empty;
}

auto front()
{
auto minRangeAndLine = nonEmpty.map!(range => tuple!("range",
"line")(range, range.front)).minElement!("a.line");
return minRangeAndLine;
}

void popFront()
{
auto minRangeAndLine = nonEmpty.map!(range => tuple!("range",
"line")(range, range.front)).minElement!("a.line");
minRangeAndLine.range.popFront;
nonEmpty = getNonEmpty;
}
}

return ByMinimum!(Ranges)();
}

void main(string[] files)
{
foreach (n; files[1 .. $].map!(name => 
File(name).byLine(No.keepTerminator)).array.byMinimum)

{
writeln(n.line);
}
}


I would like to get rid of the duplication that searches for the range 
with the next minimum, both in terms of runtime and in terms of code 
written out. Unfortunately I have no idea how to even store the result 
of this search in an attribute of ByMinimum, as I cannot write out its type.


Kind regards,
Christian



Re: Is it possible to return the subclass from a method of the parent class in dlang?

2018-03-02 Thread Christian Köstlin via Digitalmars-d-learn
>> class Timer : Thread {
>>    override Timer start() { ... }
>> }
>>
>> https://dlang.org/spec/function.html#virtual-functions
>>
>> (see item 6)
>>
>> -Steve
> Thanks for this.
> It works for me only without the override (with override I get
> Error: function timer.Timer.start does not override any function, did
> you mean to override 'core.thread.Thread.start'?).
This seems to be connected to Thread.start being a final function.
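
For reference, a minimal sketch of the covariant-return pattern from item 6, 
with a made-up non-final base method:

```D
class Base {
    Base start() { return this; }
}

class Derived : Base {
    // the override may narrow the return type from Base to Derived
    override Derived start() { return this; }
}

void main() {
    Derived d = new Derived().start; // no cast needed
}
```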



Re: Is it possible to return the subclass from a method of the parent class in dlang?

2018-03-02 Thread Christian Köstlin via Digitalmars-d-learn
On 02.03.18 21:39, Steven Schveighoffer wrote:
> On 3/2/18 3:23 PM, Christian Köstlin wrote:
>> To give an example:
>>
>> class Thread {
>>    ...
>>    Thread start() {...}
>> }
>>
>> class Timer : Thread {
>>    ...
>> }
>>
>>
>> void main() {
>>    // Timer timer = new Timer().start;  // this does not work
>>    auto timer = new Timer().start; // because timer is of type Thread
>> }
> 
> Yes:
> 
> class Timer : Thread {
>    override Timer start() { ... }
> }
> 
> https://dlang.org/spec/function.html#virtual-functions
> 
> (see item 6)
> 
> -Steve
Thanks for this.
It works for me only without the override (with override I get
Error: function timer.Timer.start does not override any function, did
you mean to override 'core.thread.Thread.start'?).

Although I wonder if it's possible to "fix" this in the Thread class with
some dlang magic, e.g. traits:
class Thread {
  traits(GetClass) start() {...}
}

or perhaps

class ThreadHelper(T) : Thread {
  override T start() { return cast(T) super.start(); }
}
class Timer : ThreadHelper!Timer {
}


Is it possible to return the subclass from a method of the parent class in dlang?

2018-03-02 Thread Christian Köstlin via Digitalmars-d-learn
To give an example:

class Thread {
  ...
  Thread start() {...}
}

class Timer : Thread {
  ...
}


void main() {
  // Timer timer = new Timer().start;  // this does not work
  auto timer = new Timer().start; // because timer is of type Thread
}


thanks in advance,
christian



Re: Help optimizing UnCompress for gzipped files

2018-01-09 Thread Christian Köstlin via Digitalmars-d-learn
On 07.01.18 14:44, Steven Schveighoffer wrote:
> Not from what I'm reading, the C solution is about the same (257 vs.
> 261). Not sure if you have averaged these numbers, especially on a real
> computer that might be doing other things.
yes you are right ... for proper benchmarking proper statistics should
be in place, taking out extreme values, averaging them, ...

> Note: I would expect it to be a tiny bit faster, but not monumentally
> faster. From my testing with the reallocation, it only reallocates a
> large quantity of data once.
> 
> However, the D solution should be much faster. Part of the issue is that
> you still aren't low-level enough :)
> 
> Instead of allocating the ubyte array with this line:
> 
> ubyte[] buffer = new ubyte[200*1024*1024];
> 
> Try this instead:
> 
> // from std.array
> auto buffer = uninitializedArray!(ubyte[])(200*1024*1024);
thanks for that ... i just did not know how to get an uninitialized
array. i was aware that dlang is nice and puts .init there :)

> Yes! I am working on doing just that, but haven't had a chance to update
> the toy project I wrote: https://github.com/schveiguy/jsoniopipe
> 
> I was planning actually on having an iopipe of JsonItem, which would
> work just like a normal buffer, but reference the ubyte buffer underneath.
> 
> Eventually, the final product should have a range of JsonValue, which
> you would recurse into in order to parse its children. All of it will be
> lazy, and stream-based, so you don't have to load the whole file if it's
> huge.
> 
> Note, you can't have an iopipe of JsonValue, because it's a recursive
> format. JsonItems are just individual defined tokens, so they can be
> linear.
sounds really good. i played around with
https://github.com/mleise/fast/blob/master/source/fast/json.d ... that's
an interesting pull parser with the wrong licence unfortunately ... i
wonder if something like this could be done on top of iopipe instead of
a "real" buffer.

---
Christian Köstlin


Re: Help optimizing UnCompress for gzipped files

2018-01-06 Thread Christian Köstlin via Digitalmars-d-learn
On 05.01.18 23:04, Steven Schveighoffer wrote:
> On 1/5/18 3:09 PM, Christian Köstlin wrote:
>> On 05.01.18 15:39, Steven Schveighoffer wrote:
>>> Yeah, I guess most of the bottlenecks are inside libz, or the memory
>>> allocator. There isn't much optimization to be done in the main program
>>> itself.
>>>
>>> D compiles just the same as C. So theoretically you should be able to
>>> get the same performance with a ported version of your C code. It's
>>> worth a shot.
>> I added another version that tries to do the "same" as the c version
>> using mallocator, but i am still way off, perhaps its creating too many
>> ranges on the underlying array. but its around the same speed as your
>> great iopipe thing.
> 
> Hm... I think really there is some magic initial state of the allocator,
> and that's what allows it to go so fast.
> 
> One thing about the D version, because druntime is also using malloc
> (the GC is backed by malloc'd data after all), the initial state of the
> heap is quite different from when you start in C. It may be impossible
> or nearly impossible to duplicate the performance. But the flipside (if
> this is indeed the case) is that you won't see the same performance in a
> real-world app anyway, even in C.
> 
> One thing to try, you preallocate the ENTIRE buffer. This only works if
> you know how many bytes it will decompress to (not always possible), but
> it will take the allocator out of the equation completely. And it's
> probably going to be the most efficient method (you aren't leaving
> behind smaller unused blocks when you realloc). If for some reason we
> can't beat/tie the C version doing that, then something else is going on.
yes ... this is something i forgot to try out ... will do now :)
mhh .. interesting numbers ... c is even faster, and my d low-level solution
is also a little bit faster, but much slower than the no-copy version
(funnily, "no copy" is the wrong name, it just overwrites all the data in
a small buffer).

>> My solution does have the same memory leak, as I am not sure how to best
>> get the memory out of the FastAppender so that it is automagically
>> cleaned up. Perhaps if we get rc things, this gets easier?
> 
> I've been giving some thought to this. I think iopipe needs some buffer
> management primitives that allow you to finagle the buffer. I've been
> needing this for some time anyway (for file seeking). Right now, the
> buffer itself is buried in the chain, so it's hard to get at the actual
> buffer.
> 
> Alternatively, I probably also need to give some thought to a mechanism
> that auto-frees the memory when it can tell nobody is still using the
> iopipe. Given that iopipe's signature feature is direct buffer access,
> this would mean anything that uses such a feature would have to be unsafe.
yes .. that's tricky ...
one question about iopipe: is it possible to transform the elements in
the pipe as well, e.g. away from a buffer of bytes to json objects?

--
Christian Köstlin



Re: Help optimizing UnCompress for gzipped files

2018-01-05 Thread Christian Köstlin via Digitalmars-d-learn
On 05.01.18 15:39, Steven Schveighoffer wrote:
> Yeah, I guess most of the bottlenecks are inside libz, or the memory
> allocator. There isn't much optimization to be done in the main program
> itself.
>
> D compiles just the same as C. So theoretically you should be able to
> get the same performance with a ported version of your C code. It's
> worth a shot.
I added another version that tries to do the "same" as the c version
using mallocator, but i am still way off; perhaps it's creating too many
ranges on the underlying array. but it's around the same speed as your
great iopipe thing.
My solution does have the same memory leak, as I am not sure how to best
get the memory out of the FastAppender so that it is automagically
cleaned up. Perhaps if we get rc things, this gets easier?
I updated: https://github.com/gizmomogwai/benchmarks/tree/master/gunzip
with the newest numbers on my machine, but I think your iopipe solution
is the best one we can get at the moment!

>> rust is doing quite well there
> 
> I'll say a few words of caution here:
> 
> 1. Almost all of these tests use the same C library to unzip. So it's
> really not a test of the performance of decompression, but the
> performance of memory management. And it appears that any test using
> malloc/realloc is in a different tier. Presumably because of the lack of
> copies (as discussed earlier).
> 2. Your rust test (I think, I'm not sure) is testing 2 things in the
> same run, which could potentially have dramatic consequences for the
> second test. For instance, it could already have all the required memory
> blocks ready, and the allocation strategy suddenly gets better. Or maybe
> there is some kind of caching of the input being done. I think you have
> a fairer test for the second option by running it in a separate program.
> I've never used rust, so I don't know what exactly your code is doing.
> 3. It's hard to make a decision based on such microbenchmarks as to
> which solution is "better" in an actual real-world program, especially
> when the state/usage of the memory allocator plays a huge role in this.
sure .. that's true




Re: Help optimizing UnCompress for gzipped files

2018-01-04 Thread Christian Köstlin via Digitalmars-d-learn
On 04.01.18 20:46, Steven Schveighoffer wrote:
> On 1/4/18 1:57 PM, Christian Köstlin wrote:
>> Thanks Steve,
>> this runs now faster, I will update the table.
> 
> Still a bit irked that I can't match the C speed :/
> 
> But, I can't get your C speed to duplicate on my mac even with gcc, so
> I'm not sure where to start. I find it interesting that you are not
> using any optimization flags for gcc.
I guess the code in my program is small enough that the optimize flags
do not matter... most of the stuff is pulled from libz, which is
dynamically linked against /usr/lib/libz.1.dylib.

I also cannot understand what more I should do (will try realloc with
Mallocator) for the dlang low-level variant to get to the c speed.
rust is doing quite well there.

--
Christian Köstlin



Re: Help optimizing UnCompress for gzipped files

2018-01-04 Thread Christian Köstlin via Digitalmars-d-learn
On 04.01.18 16:53, Steven Schveighoffer wrote:
> On 1/3/18 3:28 PM, Steven Schveighoffer wrote:
> 
>> Stay tuned, there will be updates to iopipe to hopefully make it as
>> fast in this microbenchmark as the C version :)
> 
> v0.0.3 has been released. To take advantage of using malloc/realloc, you
> can use std.experimental.allocator.mallocator.Mallocator as the
> BufferManager's allocator.
> 
> e.g.:
> 
> auto myPipe = openDev("file.gz").bufd // not here
>   .unzip!Mallocator;  // here
> 
> myPipe.ensureElems(); // this works now.
> 
> auto bytesRead = myPipe.window.length;
> 
> The reason you don't need it on the first bufd is because that is the
> buffer of the file stream, which isn't going to grow.
> 
> Note: the buffer manager does not free the data on destruction! You are
> responsible for freeing it (if you use Mallocator).
Thanks Steve,
this runs now faster, I will update the table.

Sorry, but I do not get how to clean up the mallocated memory :)
Could you help me out here?

--
Christian Köstlin


Re: Help optimizing UnCompress for gzipped files

2018-01-04 Thread Christian Köstlin via Digitalmars-d-learn
I have now added a c variant that always does malloc/memcpy/free. It's much
slower for sure.
Also I put in some output that shows when a real realloc happens. It's
like you said:

did a real realloc
did a real realloc
did a real realloc
did a real realloc
did a real realloc
did a real realloc
did not a real realloc
did not a real realloc
did not a real realloc
did not a real realloc
did not a real realloc
did not a real realloc
did not a real realloc
did not a real realloc
did a real realloc
did not a real realloc

funny thing is, i do not always get the same sequence of real reallocs.


--
Christian Köstlin


Re: Help optimizing UnCompress for gzipped files

2018-01-03 Thread Christian Köstlin via Digitalmars-d-learn
On 03.01.18 22:33, Steven Schveighoffer wrote:
> On 1/3/18 3:28 PM, Steven Schveighoffer wrote:
>> 1. The major differentiator between the C and D algorithms is the use
>> of C realloc. This one thing saves the most time. I'm going to update
>> iopipe so you can use it (stand by). I will also be examining how to
>> simulate using realloc when not using C malloc in iopipe. I think it's
>> the copying of data to the new buffer that is causing issues.
> 
> Looking at when C realloc actually moves the data, it appears it all of
> a sudden over-allocates very very large blocks, much larger than the GC
> will over-allocate. This is why the GC is losing. After a certain size,
> the GC doesn't allocate blocks to grow into, so we start copying on
> every realloc.
That's a very interesting finding! If I find some time today, I will also
try a pure malloc/memcpy/free solution in c and also look at how many real
allocs are done. That's funny, because it means that this whole *2
thing for growing buffers that you learn about is well taken care of in
libc? Or does it still mean that you should allocate in a pattern like
this for libc's algorithm to kick in? Is there an API to see how much
realloc really allocated, or is it only possible by comparing pointers to
see if realloc returns a new or the old pointer?
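
Comparing pointers is indeed the portable way to observe it; a minimal
sketch in D via core.stdc (the sizes are made up):

```D
import core.stdc.stdlib : free, malloc, realloc;
import std.stdio : writeln;

void main()
{
    auto p = malloc(1024);
    foreach (size_t newSize; [2048, 4096, 1_000_000])
    {
        auto q = realloc(p, newSize);
        // an identical pointer means the block was grown in place
        writeln(newSize, q is p ? ": grown in place" : ": real realloc (moved)");
        p = q;
    }
    free(p);
}
```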

--
Christian  Köstlin



Re: Help optimizing UnCompress for gzipped files

2018-01-03 Thread Christian Köstlin via Digitalmars-d-learn
On 02.01.18 14:51, Adam D. Ruppe wrote:
> On Tuesday, 2 January 2018 at 10:27:11 UTC, Christian Köstlin wrote:
>> After this I analyzed the first step of the process (gunzipping the
>> data from a file to memory), and found out, that dlangs UnCompress is
>> much slower than java, and ruby and plain c.
> 
> Yeah, std.zlib is VERY poorly written. You can get much better
> performance by just calling the C functions yourself instead. (You can
> just import etc.c.zlib; it is still included)
> 
> Improving it would mean changing the public API. I think the one-shot
> compress/uncompress functions are ok, but the streaming class does a lot
> of unnecessary work inside like copying stuff around.
I added a version that uses the gzip low-level APIs (similar to my
example c program). I am still having problems copying the data fast
enough to a dlang array.

please see the updated page:
https://github.com/gizmomogwai/benchmarks/tree/master/gunzip
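
For reference, a minimal sketch of driving the low-level API via etc.c.zlib
(untested; the buffer sizing is made up, and growing the slice copies, which
is exactly the slow part being discussed):

```D
import etc.c.zlib;
import std.exception : enforce;

ubyte[] gunzip(const(ubyte)[] compressed)
{
    z_stream zs;
    // windowBits 15 + 16 tells zlib to expect a gzip header
    enforce(inflateInit2(&zs, 15 + 16) == Z_OK);
    scope (exit) inflateEnd(&zs);

    auto output = new ubyte[](compressed.length * 4); // guessed initial size
    zs.next_in = cast(ubyte*) compressed.ptr;
    zs.avail_in = cast(uint) compressed.length;

    int res;
    do
    {
        if (zs.total_out >= output.length)
            output.length *= 2;
        zs.next_out = output.ptr + zs.total_out;
        zs.avail_out = cast(uint)(output.length - zs.total_out);
        res = inflate(&zs, Z_NO_FLUSH);
        enforce(res == Z_OK || res == Z_STREAM_END);
    } while (res != Z_STREAM_END);

    return output[0 .. zs.total_out];
}
```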

--
Christian Köstlin



Re: Help optimizing UnCompress for gzipped files

2018-01-02 Thread Christian Köstlin via Digitalmars-d-learn
On 02.01.18 21:13, Steven Schveighoffer wrote:
> Well, you don't need to use appender for that (and doing so is copying a
> lot of the data an extra time). All you need is to extend the pipe until
> there isn't any more new data, and it will all be in the buffer.
> 
> // almost the same line from your current version
> auto mypipe = openDev("../out/nist/2011.json.gz")
>   .bufd.unzip(CompressionFormat.gzip);
> 
> // This line here will work with the current release (0.0.2):
> while(mypipe.extend(0) != 0) {}
Thanks for this input, I updated the program to make use of this method
and compare it to the appender thing as well.

>> I will give the direct gunzip calls a try ...
I added direct gunzip calls as well... Those are really good, as long as
I do not try to get the data into ram :) then it is "bad" again.
I wonder what the real difference between the low-level solution with its
own appender and the c version is. To me they look almost the same (ugly;
only the performance seems to be nice).

Funny thing is, if I add the clang address sanitizer things to the
c program, I get almost the same numbers as for java :)


> Yeah, with jsoniopipe being very raw, I wouldn't be sure it was usable
> in your case. The end goal is to have something fast, but very easy to
> construct. I wasn't planning on focusing on the speed (yet) like other
> libraries do, but ease of writing code to use it.
> 
> -Steve

--
Christian Köstlin


Re: Help optimizing UnCompress for gzipped files

2018-01-02 Thread Christian Köstlin via Digitalmars-d-learn
On 02.01.18 21:48, Steven Schveighoffer wrote:
> On 1/2/18 3:13 PM, Steven Schveighoffer wrote:
>> // almost the same line from your current version
>> auto mypipe = openDev("../out/nist/2011.json.gz")
>>    .bufd.unzip(CompressionFormat.gzip);
> 
> Would you mind telling me the source of the data? When I do get around
> to it, I want to have a good dataset to test things against, and would
> be good to use what others reach for.
> 
> -Steve
Hi Steve,

thanks for looking into this.
I use data from nist.gov; the Makefile includes these download instructions:
curl -s https://static.nvd.nist.gov/feeds/json/cve/1.0/nvdcve-1.0-2011.json.gz > out/nist/2011.json.gz

--
Christian Köstlin



Re: Help optimizing UnCompress for gzipped files

2018-01-02 Thread Christian Köstlin via Digitalmars-d-learn
On 02.01.18 15:09, Steven Schveighoffer wrote:
> On 1/2/18 8:57 AM, Adam D. Ruppe wrote:
>> On Tuesday, 2 January 2018 at 11:22:06 UTC, Stefan Koch wrote:
>>> You can make it much faster by using a sliced static array as buffer.
>>
>> Only if you want data corruption! It keeps a copy of your pointer
>> internally: https://github.com/dlang/phobos/blob/master/std/zlib.d#L605
>>
>> It also will always overallocate new buffers on each call
>> 
>>
>> There is no efficient way to use it. The implementation is substandard
>> because the API limits the design.
> 
> iopipe handles this quite well. And deals with the buffers properly
> (yes, it is very tricky. You have to ref-count the zstream structure,
> because it keeps internal pointers to *itself* as well!). And no, iopipe
> doesn't use std.zlib, I use the etc.zlib functions (but I poached some
> ideas from std.zlib when writing it).
> 
> https://github.com/schveiguy/iopipe/blob/master/source/iopipe/zip.d
> 
> I even wrote a json parser for iopipe. But it's far from complete. And
> probably needs updating since I changed some of the iopipe API.
> 
> https://github.com/schveiguy/jsoniopipe
> 
> Depending on the use case, it might be enough, and should be very fast.
> 
> -Steve
Thanks Steve for this proposal (actually I already had an iopipe version
on my harddisk that I applied to this problem). It's more or less your
unzip example + putting the data into an appender (I hope this is how it
should be done to get the data into RAM).

iopipe is already better than the normal dlang version, almost like
java, but still far from the solution. I updated
https://github.com/gizmomogwai/benchmarks/tree/master/gunzip

I will give the direct gunzip calls a try ...

In terms of json parsing, I had really nice results with the fast.json
pull parser, but it's a little bit comparing apples with oranges, because
I did not pull out all the data there.

---
Christian


Help optimizing UnCompress for gzipped files

2018-01-02 Thread Christian Köstlin via Digitalmars-d-learn
Hi all,

over the holidays, I played around with processing some gzipped json
data. The first version was implemented in ruby, but took too long, so I
tried dlang. This was already faster, but not really satisfyingly fast.
Then I wrote another version in java, which was much faster.

After this I analyzed the first step of the process (gunzipping the data
from a file to memory), and found out, that dlangs UnCompress is much
slower than java, and ruby and plain c.

There was some discussion on the forum a while ago:
http://forum.dlang.org/thread/pihxxhjgnveulcdta...@forum.dlang.org

The code I used and the numbers I got are here:
https://github.com/gizmomogwai/benchmarks/tree/master/gunzip

I used an i7 macbook with os x 10.13.2, ruby 2.5.0 built via rvm,
python3 installed by homebrew, builtin clang compiler, ldc-1.7.0-beta1,
java 1.8.0_152.

Is there anything I can do to speed up the dlang stuff?

Thanks in advance,
Christian


Re: Behavior of joining mapresults

2017-12-21 Thread Christian Köstlin via Digitalmars-d-learn
On 21.12.17 08:41, Jonathan M Davis wrote:
> I would think that it would make a lot more sense to simply put the whole
> thing in an array than to use memoize. e.g.
> 
> auto arr = iota(1, 5).map!parse().array();
that's also possible, but i wanted to make use of the laziness ... e.g.
if i then search over the flattened stuff, i do not have to parse the
10th file.
i replaced joiner with a primitive flatten function like this:
#!/usr/bin/env rdmd -unittest
unittest {
    import std.stdio;
    import std.range;
    import std.algorithm;
    import std.string;
    import std.functional;

    auto parse(int i) {
        writeln("parsing %s".format(i));
        return [1, 2, 3];
    }

    writeln(iota(1, 5).map!(parse));
    writeln("---");
    writeln((iota(1, 5).map!(parse)).joiner);
    writeln("---");
    writeln((iota(1, 5).map!(memoize!parse)).joiner);
    writeln("---");
    writeln((iota(1, 5).map!(parse)).flatten);
}

auto flatten(T)(T input) {
    import std.range;

    struct Res {
        T input;
        ElementType!T current;

        this(T input) {
            this.input = input;
            this.current = this.input.front;
            advance();
        }

        private void advance() {
            while (current.empty) {
                if (input.empty) {
                    return;
                }
                input.popFront;
                if (input.empty) {
                    return;
                }
                current = input.front;
            }
        }

        bool empty() {
            return current.empty;
        }

        auto front() {
            return current.front;
        }

        void popFront() {
            current.popFront;
            advance();
        }
    }
    return Res(input);
}

void main() {}

With this implementation my program behaves as expected (parsing the
input data only once).



Re: Behavior of joining mapresults

2017-12-20 Thread Christian Köstlin via Digitalmars-d-learn
On 20.12.17 17:30, Christian Köstlin wrote:
> that's an idea, thanks a lot, will give it a try ...
#!/usr/bin/env rdmd -unittest
unittest {
    import std.stdio;
    import std.range;
    import std.algorithm;
    import std.string;
    import std.functional;

    auto parse(int i) {
        writeln("parsing %s".format(i));
        return [1, 2, 3];
    }

    writeln(iota(1, 5).map!(memoize!parse));
    writeln("---");
    writeln((iota(1, 5).map!(memoize!parse)).joiner);
}

void main() {}

It works, but i fear for the data that is stored in the memoization. At the
moment it's not a big issue, as all the data fits comfortably into ram,
but for bigger data another approach is needed (probably even my current
json parsing must be exchanged).

I still wonder if the joiner calls front more often than necessary. For
sure it's valid to call front as many times as one sees fit, but with a
lazy map in between, it might not be the best solution.



Re: Behavior of joining mapresults

2017-12-20 Thread Christian Köstlin via Digitalmars-d-learn
On 20.12.17 17:19, Stefan Koch wrote:
> On Wednesday, 20 December 2017 at 15:28:00 UTC, Christian Köstlin wrote:
>> When working with json data files that were a little bigger than
>> convenient, I stumbled upon a strange behavior with joining of map
>> results (I understand that this is more or less flatmap).
>> I mapped input files to JSONValues, from which I took out some arrays
>> whose content I wanted to join.
>> Although the joiner is at the end of the functional pipe, it led to
>> the parsing code being called twice.
>> I tried to reduce the problem:
>>
>> [...]
> 
> you need to memoize I guess, map is lazy.
that's an idea, thanks a lot, will give it a try ...



Behavior of joining mapresults

2017-12-20 Thread Christian Köstlin via Digitalmars-d-learn
When working with json data files that were a little bigger than
convenient, I stumbled upon a strange behavior with joining of map results
(I understand that this is more or less flatmap).
I mapped input files to JSONValues, from which I took out some arrays
whose content I wanted to join.
Although the joiner is at the end of the functional pipe, it led to
the parsing code being called twice.
I tried to reduce the problem:

#!/usr/bin/env rdmd -unittest
unittest {
    import std.stdio;
    import std.range;
    import std.algorithm;
    import std.string;

    auto parse(int i) {
        writeln("parsing %s".format(i));
        return [1, 2, 3];
    }

    writeln(iota(1, 5).map!(parse));
    writeln("---");
    writeln((iota(1, 5).map!(parse)).joiner);
}

void main() {}

As you can see if you run this code, parsing for 1 through 4 is called two
times each. What am I doing wrong here?

Thanks in advance,
Christian



Re: For fun: Expressive C++ 17 Coding Challenge in D

2017-10-15 Thread Christian Köstlin via Digitalmars-d-learn
Another solution, using dlang's builtin csv support for reading:

import std.csv;
import std.file;
import std.algorithm : map;
import std.range;

string csvWrite(Header, Rows)(Header header, Rows rows)
{
    return header.join(",") ~ "\n"
        ~ rows.map!(r => header.map!(h => r[h]).join(",")).join("\n");
}

int main(string[] args)
{
    auto inputFile = args[1];
    auto columnName = args[2];
    auto replacement = args[3];
    auto outputFile = args[4];

    auto records = readText(inputFile).csvReader!(string[string])(null);
    write(outputFile, csvWrite(records.header, records.map!((r) {
        r[columnName] = replacement;
        return r;
    })));
    return 0;
}

Unfortunately this is still far from the powershell solution :/

cK


Whats the most common formatting style for dlang?

2017-07-19 Thread Christian Köstlin via Digitalmars-d-learn
Until now I formatted my toy programs how I like it. Checking up on
dfmt, I saw that these deviate from dfmt's default settings.
Do dfmt's default settings reflect the most common style for dlang?
Actually I really like all languages that take the whole discussion
about formatting out of your hands (like go and, to a certain degree, python).


Re: Fiber based UI-Toolkit

2017-07-10 Thread Christian Köstlin via Digitalmars-d-learn
On 10.07.17 15:37, Gerald wrote:
> On Sunday, 9 July 2017 at 19:43:14 UTC, Christian Köstlin wrote:
>> I wonder if there is any fiber based / fiber compatible UI-Toolkit out
>> for dlang. The second question is, if it would make sense at all to
>> have such a thing?
> 
> As previously noted, like other UI toolkits GTK maintains a single
> thread for processing UI events with an event loop running in that
> thread. GTK does support passing a function to be called when the main
> loop is idle, it could be possible to leverage this to manage fibers
> with appropriate yielding.
> 
> Having said that, I'm in the camp where this doesn't make much sense.
> Using fibers on the main UI thread is likely going to result in a
> blocked UI whenever a fiber takes too long to do its work. History has
> shown that cooperative multi-tasking typically doesn't work well for UI
> applications.
> 
> I think you would be much better off starting an additional thread and
> managing fibers in that thread outside the context of the main UI
> thread. You can then use things like std.concurrency to receive messages
> from the external thread to update the UI as needed in its own thread.
Thanks for this answer,

my thinking was also going in this direction.
I guess for many use cases fibers in a UI could be good enough (like my
simple example: get something from a webserver, (quickly) process it
and display it). Given a fiber-based HTTP client, this could work quite
nicely. For real programs, with heavy transformation of the data,
I think this would not work so well, because either you block the UI
with the processing code in the fiber, or your processing code does not
run at full speed, because it has some yields sprinkled throughout the
code.

On the other hand, I also guess that for UI, and even more for
audio rendering, every tiny bit of delay can hurt the experience.

So probably not a good idea for a real-world application?

Best regards,
Christian

P.S. I still wonder about Jacob's argument about Microsoft's async/await.
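
For what it's worth, a minimal sketch of the thread-plus-messages
approach described above (my own illustration; the wiring into a real
toolkit's idle callback is assumed, not shown):

```d
import std.concurrency : receiveTimeout, send, spawn, thisTid, Tid;
import core.time : msecs;

// runs off the UI thread: download and process, then report back
void worker(Tid ui) {
  // ... long-running download and processing would happen here ...
  ui.send("processed result");
}

void main() {
  spawn(&worker, thisTid);

  // in a real toolkit this poll would live in an idle/timer callback
  bool done = false;
  while (!done) {
    receiveTimeout(10.msecs, (string result) {
      // back on the UI thread: safe to update widgets here
      done = true;
    });
    // ... keep pumping UI events / rendering here ...
  }
}
```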



Re: Fiber based UI-Toolkit

2017-07-10 Thread Christian Köstlin via Digitalmars-d-learn
On 10.07.17 00:23, Christian Köstlin wrote:
To elaborate on the previous post, I uploaded a small example that
naively tries to mix dlangui with fibers. Please have a look at:
https://github.com/gizmomogwai/fibered-ui.git
As expected, it does not work: the fiber in the callback is started,
but after the first yield, dlangui never returns to the fiber.
Does anybody know what needs to be done for that?

thanks a lot,
Christian
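
For anyone reading along, a hand-rolled sketch of what such glue might
look like (my own illustration; whatever idle/timer hook the toolkit
offers is assumed, not shown): keep a list of fibers and resume each
one from the hook until it terminates.

```d
import core.thread : Fiber;
import std.algorithm : filter;
import std.array : array;

Fiber[] pending; // fibers waiting to be resumed

// imagine the UI toolkit calling this from its idle or timer callback;
// every call resumes each fiber until its next yield
void onIdle() {
  foreach (f; pending)
    f.call();
  pending = pending.filter!(f => f.state != Fiber.State.TERM).array;
}

void startTask(void delegate() work) {
  pending ~= new Fiber(work);
}
```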

> On 09.07.17 23:12, bauss wrote:
>> On Sunday, 9 July 2017 at 19:43:14 UTC, Christian Köstlin wrote:
>>> I wonder if there is any fiber based / fiber compatible UI-Toolkit out
>>> for dlang. The second question is, if it would make sense at all to
>>> have such a thing?
>>>
>>> christian
>>
>> It doesn't really make sense to have that, because most (if not all)
>> operating systems only allow rendering from a single thread and I
>> believe OSX (possibly macOS too.) only allows it from the main thread.
>> Which means the only thing you can really operate on other threads are
>> events, but you'll always have to do callbacks to your UI thread in
>> order to render.
> Thanks for answering! You are touching exactly on my question:
> Let's say that all the event handling is done by fiber-aware code
> (meaning all I/O yields the thread when it would block, and perhaps
> a yield function for calculation-heavy operations). It would then,
> I think, reduce the "risk" of an ANR (Application Not Responding, from
> Android) or the famous beachball, without sacrificing the clarity of
> the code.
> 
> E.g. you want to download something from a webpage and process the
> data when you click a button. You cannot do this in the button's
> onclick callback (because this is usually a long-running operation and
> the callback is called from the main thread). With fibers, the main
> thread could continue running (and update the screen) as soon as the
> I/O thread blocks or the processing thread calls yield (which it
> should on a regular basis). After the processing, as soon as the fiber
> gets back control, the result can easily be integrated back into the
> UI, because it is already in the right thread.
> Compare this with the traditional approach of spawning a new thread,
> passing over the arguments, processing them, passing back the result
> and integrating it into the UI; it could perhaps be "simpler".
> 
> On the other hand, the processing code would get a little bit messy
> because of the manually inserted yields (as far as I know, the Erlang
> VM for example inserts such instructions automatically every n
> instructions).
> 
> What do you think?
> 



Re: Fiber based UI-Toolkit

2017-07-09 Thread Christian Köstlin via Digitalmars-d-learn
On 09.07.17 23:12, bauss wrote:
> On Sunday, 9 July 2017 at 19:43:14 UTC, Christian Köstlin wrote:
>> I wonder if there is any fiber based / fiber compatible UI-Toolkit out
>> for dlang. The second question is, if it would make sense at all to
>> have such a thing?
>>
>> christian
> 
> It doesn't really make sense to have that, because most (if not all)
> operating systems only allow rendering from a single thread and I
> believe OSX (possibly macOS too.) only allows it from the main thread.
> Which means the only thing you can really operate on other threads are
> events, but you'll always have to do callbacks to your UI thread in
> order to render.
Thanks for answering! You are touching exactly on my question:
Let's say that all the event handling is done by fiber-aware code
(meaning all I/O yields the thread when it would block, and perhaps
a yield function for calculation-heavy operations). It would then,
I think, reduce the "risk" of an ANR (Application Not Responding, from
Android) or the famous beachball, without sacrificing the clarity of
the code.

E.g. you want to download something from a webpage and process the data
when you click a button. You cannot do this in the button's onclick
callback (because this is usually a long-running operation and the
callback is called from the main thread). With fibers, the main thread
could continue running (and update the screen) as soon as the I/O
thread blocks or the processing thread calls yield (which it should on
a regular basis). After the processing, as soon as the fiber gets back
control, the result can easily be integrated back into the UI, because
it is already in the right thread.
Compare this with the traditional approach of spawning a new thread,
passing over the arguments, processing them, passing back the result
and integrating it into the UI; it could perhaps be "simpler".

On the other hand, the processing code would get a little bit messy
because of the manually inserted yields (as far as I know, the Erlang
VM for example inserts such instructions automatically every n
instructions).

What do you think?



Fiber based UI-Toolkit

2017-07-09 Thread Christian Köstlin via Digitalmars-d-learn
I wonder if there is any fiber based / fiber compatible UI-Toolkit out
for dlang. The second question is, if it would make sense at all to have
such a thing?

christian


std.concurrency and sendWithDelay

2017-06-26 Thread Christian Köstlin via Digitalmars-d-learn
I really like the std.concurrency
(https://dlang.org/phobos/std_concurrency.html) with spawn, send,
receive ...

Is there a built-in way to schedule code or an event after a delay (e.g.
in Android:
https://developer.android.com/reference/android/os/Handler.html#postDelayed(java.lang.Runnable,
long) or
https://developer.android.com/reference/android/os/Handler.html#sendMessageDelayed(android.os.Message,
long))?

thanks in advance,
christian koestlin
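
As far as I can tell there is no built-in equivalent in
std.concurrency, but a helper thread that sleeps and then forwards the
message is a small workaround; sendWithDelay below is a made-up name,
not a Phobos function.

```d
import std.concurrency : receive, send, spawn, thisTid, Tid;
import core.thread : Thread;
import core.time : Duration, seconds;

// hand-rolled stand-in for Android's postDelayed/sendMessageDelayed
void sendWithDelay(T)(Tid target, Duration delay, T msg) {
  spawn((Tid t, Duration d, T m) {
    Thread.sleep(d);
    t.send(m);
  }, target, delay, msg);
}

void main() {
  sendWithDelay(thisTid, 2.seconds, "tick");
  receive((string s) { /* arrives roughly two seconds later */ });
}
```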


Re: index of ddocs

2017-03-06 Thread Christian Köstlin via Digitalmars-d-learn
On 06/03/2017 11:29, rikki cattermole wrote:
> On 06/03/2017 11:25 PM, Christian Köstlin wrote:
>> Hi,
>>
>> I have a small dub-based application project with several modules (it's
>> not a vibe.d project). I can easily create ddocs for the modules by
>> running dub build --build=docs.
>>
>> At the moment I am missing a page that shows the contents of the whole
>> package. Did I miss something here?
>>
>> Best regards,
>> Christian
> 
> Nope, DDOC doesn't do that for you[0].
> 
> [0] https://github.com/dlang/phobos/blob/master/index.d
> 
thanks ... will do accordingly :)


index of ddocs

2017-03-06 Thread Christian Köstlin via Digitalmars-d-learn
Hi,

I have a small dub-based application project with several modules (it's
not a vibe.d project). I can easily create ddocs for the modules by
running dub build --build=docs.

At the moment I am missing a page that shows the contents of the whole
package. Did I miss something here?

Best regards,
Christian


Re: How to enforce compile time evaluation (and test if it was done at compile time)

2017-03-01 Thread Christian Köstlin via Digitalmars-d-learn
On 01/03/2017 00:09, Joseph Rushton Wakeling wrote:
> On Tuesday, 28 February 2017 at 00:22:28 UTC, sarn wrote:
>>> If you ever have doubts, you can always use something like this to
>>> check:
>>>
>>> assert (__ctfe);
>>
>> Sorry, "enforce" would more appropriate if you're really checking.
> 
> if (!__ctfe) assert(false);
> 
> ... might be the best option.  That shouldn't be compiled out even in
> -release builds.
That's a nice idea! Is this happening because assert(false) is always
kept in release builds (as mentioned here:
https://dlang.org/spec/contracts.html#assert_contracts), or because the
if would have no instructions anymore if it were removed?

cK
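
To make that concrete, a tiny sketch of the guard (my own
illustration): assert(false) is never stripped, so the branch survives
-release, while during CTFE __ctfe is true and the branch is skipped.

```d
int doubled(int x) {
  if (!__ctfe)
    assert(false); // halts at runtime, even in -release builds
  return x * 2;
}

enum y = doubled(21);    // fine: evaluated during compilation
// auto z = doubled(21); // would abort at runtime
```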



Re: How to enforce compile time evaluation (and test if it was done at compile time)

2017-02-27 Thread Christian Köstlin via Digitalmars-d-learn
On 28/02/2017 01:20, sarn wrote:
> On Monday, 27 February 2017 at 19:26:06 UTC, Christian Köstlin wrote:
>> How can I make sure, that the calculations are done at compile time?
> 
> If you ever have doubts, you can always use something like this to check:
> 
> assert (__ctfe);
Thanks a lot, it actually works as you describe!
As I understand it, the only difference between assert and enforce is
that assert is not compiled into release builds?

Thanks!
Christian
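
For the record, a small sketch of the difference as I understand it:
assert is a debugging check that -release removes (except for
assert(false)), while enforce throws an Exception and always stays.

```d
import std.exception : enforce;

void f(int x) {
  assert(x > 0, "checked in debug builds, removed by -release");
  enforce(x > 0, "always checked; throws an Exception on failure");
}
```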



How to enforce compile time evaluation (and test if it was done at compile time)

2017-02-27 Thread Christian Köstlin via Digitalmars-d-learn
I have a small example that can be used to express 3601000 ms as 1h 1s
(a much more advanced version has already been done by
https://github.com/nordlow/units-d).

I would like to enforce that the precomputation (multiplying and
inverting the list of Scales) is done at compile time.

Is it enough to add static immutable modifiers?
How can I make sure that the calculations are done at compile time?

Thanks in advance,
Christian


public struct Unit {
  import std.algorithm.iteration;
  import std.range;

  public struct Scale {
string name;
long factor;
  }

  public struct Part {
string name;
long v;
string toString() {
  import std.conv;
  return v.to!(string) ~ name;
}
  }

  private string name;
  private Scale[] scales;

  public this(string name, Scale[] scales) {
this.name = name;
this.scales = cumulativeFold!((result,x) => Scale(x.name,
result.factor * x.factor))(scales).array.retro.array;
  }

  public Part[] transform(long v) immutable {
import std.array;

auto res = appender!(Part[]);
auto tmp = v;
foreach (Scale scale; scales) {
  auto h = tmp / scale.factor;
  tmp = v % scale.factor;
  res.put(Part(scale.name, h));
}
return res.data;
  }
}

Unit.Part[] onlyRelevant(Unit.Part[] parts) {
  import std.array;
  auto res = appender!(Unit.Part[]);
  bool needed = false;
  foreach (part; parts) {
if (needed || (part.v > 0)) {
  needed = true;
}
if (needed) {
  res.put(part);
}
  }
  return res.data;
}

Unit.Part[] mostSignificant(Unit.Part[] parts, long nr) {
  import std.algorithm.comparison;
  auto max = min(parts.length, nr);
  return parts[0..max];
}

unittest {
  static immutable time = Unit("time", [Unit.Scale("ms", 1),
Unit.Scale("s", 1000), Unit.Scale("m", 60), Unit.Scale("h", 60),
Unit.Scale("d", 24)]);

  auto res = time.transform(1 + 2*1000 + 3*1000*60 + 4*1000*60*60 + 5 *
1000*60*60*24);
  res.length.shouldEqual(5);
  res[0].name.shouldEqual("d");
  res[0].v.shouldEqual(5);
  res[1].name.shouldEqual("h");
  res[1].v.shouldEqual(4);
  res[2].name.shouldEqual("m");
  res[2].v.shouldEqual(3);
  res[3].name.shouldEqual("s");
  res[3].v.shouldEqual(2);
  res[4].name.shouldEqual("ms");
  res[4].v.shouldEqual(1);

  res = time.transform(2001).onlyRelevant;
  res.length.shouldEqual(2);
  res[0].name.shouldEqual("s");
  res[0].v.shouldEqual(2);
  res[1].name.shouldEqual("ms");
  res[1].v.shouldEqual(1);

  res = time.transform(2001).onlyRelevant.mostSignificant(1);
  res.length.shouldEqual(1);
  res[0].name.shouldEqual("s");
  res[0].v.shouldEqual(2);
}
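
One quick way to check that the whole pipeline is CTFE-able (my own
addition, assuming std.array.Appender works under CTFE in your compiler
version): force the computation into a compile-time context and static
assert on the result. If this compiles, both the construction and
transform ran at compile time.

```d
static immutable time2 = Unit("time",
    [Unit.Scale("ms", 1), Unit.Scale("s", 1000)]);
enum parts = time2.transform(2001);  // enum forces CTFE
static assert(parts.length == 2);
static assert(parts[0].v == 2 && parts[1].v == 1); // 2s and 1ms
```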


Re: std.experimental.logger + threadIds

2016-12-19 Thread Christian Köstlin via Digitalmars-d-learn
On 19/12/2016 21:32, Robert burner Schadek wrote:
> The ugly way is to create a @trusted function/lambda that coverts the
> threadId to a string.
> 
> Not sure about the pretty way.
Thanks a lot, that works well enough. Just for reference, I added:

import std.concurrency : Tid;

// @trusted wrapper: the conversion itself is not @safe
string tid2string(Tid id) @trusted {
  import std.conv : text;
  return text(id);
}




std.experimental.logger + threadIds

2016-12-19 Thread Christian Köstlin via Digitalmars-d-learn
I am experimenting with the logger interface and want to write a custom
logger that also outputs the threadId or Tid of the log entries.
The documentation shows how to write a custom logger, but I am unable
to convert the threadId to a string, because none of the conversion
functions are @safe.

Is there a way around this?

Thanks in advance,
Christian


Re: Multiple producer - multiple consumer with std.concurrency?

2016-12-08 Thread Christian Köstlin via Digitalmars-d-learn
Hi Ali,

Thanks for the input, will read about this.
About the slowness: I think it also depends on the situation.
Sure, every message from all producers/consumers has to go through one
MessageBox, but that is a small critical section; if the work for
producing and consuming takes long enough, the contention is not so
important (for small, desktop-sized problems). I would still be able to
stress all my cores :)

About a single dispatcher: why not. But the dispatcher would have to
keep track (manually) of tasks to do and unoccupied workers. While this
is for sure possible, simply sharing the MessageBox would be a great
solution, I think. Looking into std.concurrency, it seems that it should
be possible to provide another spawn function that takes an additional
MessageBox parameter, which is then used for creating the Tid (given
that get and put on MessageBox are thread-safe).

What do you think?

Christian

On 08/12/2016 00:06, Ali Çehreli wrote:
> The simplest idea is to have a single dispatcher thread that distributes
> to consumers. However, both that and other shared mailbox designs are
> inherently slow due to contention on this single mailbox.
> 
> Sean Parent's "Better Code: Concurrency" presentation does talk about
> that topic and tells how "task stealing" is a solution.
> 
> There are other concurrency models out there like the Disruptor:
> 
>   https://lmax-exchange.github.io/disruptor/
> 
> It's a very interesting read but I don't know how it can be done with
> Phobos. It would be awesome if D had that solution.
> 
> Ali
> 



Multiple producer - multiple consumer with std.concurrency?

2016-12-07 Thread Christian Köstlin via Digitalmars-d-learn
I really like std.concurrency for message-passing-style coding.

Today I thought about a scenario where I need a multiple-producer,
multiple-consumer pattern.
The multiple-producer side is easily covered by std.concurrency,
because all producers can just send to one Tid.
The tricky part is the multiple-consumer side. At the moment I do not
see a way for several Tids to share the same mailbox. Did I miss
something, or how would you implement this paradigm?

thanks in advance,
christian
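
For illustration, a hand-rolled blocking queue that several producers
and consumers could share (my own sketch, not std.concurrency's
MessageBox; shared-qualification and the casts it would need are left
out for brevity):

```d
import core.sync.condition : Condition;
import core.sync.mutex : Mutex;

// minimal blocking MPMC queue: all producers put, all consumers take
class SharedQueue(T) {
  private T[] items;
  private Mutex mtx;
  private Condition cond;

  this() {
    mtx = new Mutex;
    cond = new Condition(mtx);
  }

  void put(T item) {
    synchronized (mtx) {
      items ~= item;
      cond.notify(); // wake one waiting consumer
    }
  }

  T take() {
    synchronized (mtx) {
      while (items.length == 0)
        cond.wait(); // releases mtx while waiting
      auto item = items[0];
      items = items[1 .. $];
      return item;
    }
  }
}
```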


How to get the name for a Tid

2016-11-23 Thread Christian Köstlin via Digitalmars-d-learn
std.concurrency contains the register function to associate a name with
a Tid. This is stored internally in an associative array namesByTid.
I see no accessors for this. Is there a way to get at the names
associated with a Tid?

Thanks,
Christian
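
As far as I can see, std.concurrency only exposes the name-to-Tid
direction (locate); a small workaround (my own sketch, registerNamed is
a made-up helper) is to track the reverse mapping yourself at
registration time:

```d
import std.concurrency : register, Tid;

string[Tid] nameOf; // our own reverse map; Phobos keeps its map private

void registerNamed(string name, Tid tid) {
  register(name, tid);
  nameOf[tid] = name;
}
```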


Re: Sending Tid in a struct

2016-11-23 Thread Christian Köstlin via Digitalmars-d-learn
On 03/03/2012 18:35, Timon Gehr wrote:
> On 03/03/2012 12:09 PM, Nicolas Silva wrote:
>> Hi,
>>
>> I'm trying to send structs using std.concurrency. the struct contains
>> a Tid (the id of the sender) so that the receiver can send an answer.
>>
>> say:
>>
>> struct Foo
>> {
>>Tid tid;
>>string str;
>> }
>>
>> // ...
>>
>> Foo f = {
>>tid: thisTid,
>>str: "hello!"
>> };
>> std.concurrency.send(someThread, f);
>> // /usr/include/d/dmd/phobos/std/concurrency.d(465): Error: static
>> assert  "Aliases to mutable thread-local data not allowed."
>> // hello.d(15):instantiated from here: send!(Foo)
>>
>> However, I can send a Tid if I pass it directly as a parameter of the
>> send function instead of passing it within a struct.
>>
>> Is this a bug ? It looks like so to me but I guess I could have missed
>> something.
>>
>> thanks in advance,
>>
>> Nicolas
> 
> Yes, this seems to be a bug.
> 
> Workaround:
> 
> struct Foo{
> string s;
> Tid id;
> }
> 
> void foo(){
> Foo foo;
> receive((Tuple!(string,"s",Tid,"id") bar){foo=Foo(bar.s,bar.id);});
> }
> 
> void main(){
> auto id = spawn(&foo);
> id.send("string",id);
> ...
> }
I had a similar problem with this, and it seems this is still a bug in
dmd 2.072.

best regards,
christian




Re: Speed of synchronized

2016-10-18 Thread Christian Köstlin via Digitalmars-d-learn
On 18/10/16 07:04, Daniel Kozak via Digitalmars-d-learn wrote:
> dub run --build=release --compiler=ldc
On my machine I get the following output (using ldc2):
ldc2 --version
LDC - the LLVM D compiler (1.0.0):
  based on DMD v2.070.2 and LLVM 3.8.1
  built with LDC - the LLVM D compiler (0.17.1)
  Default target: x86_64-apple-darwin15.6.0
  Host CPU: haswell
  http://dlang.org - http://wiki.dlang.org/LDC

  Registered Targets:
amdgcn  - AMD GCN GPUs
arm - ARM
armeb   - ARM (big endian)
nvptx   - NVIDIA PTX 32-bit
nvptx64 - NVIDIA PTX 64-bit
r600- AMD GPUs HD2XXX-HD6XXX
thumb   - Thumb
thumbeb - Thumb (big endian)
x86 - 32-bit X86: Pentium-Pro and above
x86-64  - 64-bit X86: EM64T and AMD64


dub test --compiler=ldc2 (my unittest configuration now includes the
proper release flags, thanks to Sönke).
No source files found in configuration 'library'. Falling back to "dub
-b unittest".
Performing "unittest" build using ldc2 for x86_64.
05-threads ~master: building configuration "application"...
source/app.d(18): Deprecation: read-modify-write operations are not
allowed for shared variables. Use
core.atomic.atomicOp!"+="(this.counter, 1) instead.
source/app.d(28): Deprecation: read-modify-write operations are not
allowed for shared variables. Use
core.atomic.atomicOp!"+="(this.counter, 1) instead.
source/app.d(43): Deprecation: read-modify-write operations are not
allowed for shared variables. Use
core.atomic.atomicOp!"+="(this.counter, 1) instead.
Running ./05-threads
app.AtomicCounter: got: 100 expected: 100 in 21 ms, 692 μs, and
6 hnsecs
app.ThreadSafe1Counter: got: 100 expected: 100 in 3 secs, 909
ms, 137 μs, and 3 hnsecs
app.ThreadSafe2Counter: got: 100 expected: 100 in 3 secs, 724
ms, 201 μs, and 9 hnsecs
app.ThreadUnsafeCounter: got: 759497 expected: 100 in 8 ms, 841 μs,
and 9 hnsecs
from example got: 3 secs, 840 ms, 387 μs, and 2 hnsecs




Looks similar to me.

Thanks, Christian



Re: Speed of synchronized

2016-10-17 Thread Christian Köstlin via Digitalmars-d-learn
On 17/10/16 14:44, Christian Köstlin wrote:
> On 17/10/16 14:09, Daniel Kozak via Digitalmars-d-learn wrote:
>> Dne 16.10.2016 v 10:41 Christian Köstlin via Digitalmars-d-learn napsal(a):
>>> Hi,
>>>
>>> for an exercise I had to implement a thread safe counter.
>>> This is what I came up with:
>>> 
>>>
>>> btw. I run the code with dub run --build=release
>>>
>>> Thanks in advance,
>>> Christian
>> So I have done some testing, on my pc:
>> Java result
>> counter.AtomicLongCounter@7ff5e7d8 expected: 200 got: 100 in: 83ms
>> counter.ThreadSafe2Counter@59b44e4b expected: 200 got: 100 in: 77ms
>> counter.ThreadSafe1Counter@2e5f6b4b expected: 200 got: 100 in:
>> 154ms
>> counter.ThreadUnsafeCounter@762b155d expected: 200 got: 730428 in: 13ms
>>
>> and my D results (code: http://dpaste.com/3QFXACY ):
>> snip.AtomicCounter: got: 100 expected: 100 in 77 ms and 783 μs
>> snip.ThreadSafe1Counter: got: 100 expected: 100 in 287 ms, 727
>> μs, and 3 hnsecs
>> snip.ThreadSafe2Counter: got: 100 expected: 100 in 281 ms, 117
>> μs, and 1 hnsec
>> snip.ThreadSafe3Counter: got: 100 expected: 100 in 158 ms, 480
>> μs, and 2 hnsecs
>> snip.ThreadUnsafeCounter: got: 100 expected: 100 in 6 ms, 682
>> μs, and 1 hnsec
>>
>> so atomic is same as in Java pthread_mutex is same speed as java
>> synchronized
>> D mutexes and D synchronized are almost same, I belive that if I could
>> setup same attrs as in pthread version it will be around 160ms too.
>>
>> Unsafe is almost same for D and java. Only java ReentrantLock seems to
>> work better. I believe there is some trick, so it will end up not using
>> mutexes in the end at all. For example consider this change in D code:
>>
>> void doIt(alias counter)() {
>>   auto thg = new ThreadGroup();
>>   for (int i=0; i<NR_OF_THREADS; ++i) {
>>  thg.create(!(counter));
>>   }
>>   thg.joinAll();
>> }
>>
>> change it to
>>
>> void doIt(alias counter)() {
>>   auto thg = new ThreadGroup();
>>   for (int i=0; i<NR_OF_THREADS; ++i) {
>> auto tc = thg.create(!(counter));
>> tc.join();
>>   }
>> }
>>
>> and results are:
>>
>> snip.AtomicCounter: got: 100 expected: 100 in 22 ms, 251 μs, and
>> 6 hnsecs
>> snip.ThreadSafe1Counter: got: 100 expected: 100 in 46 ms, 146
>> μs, and 3 hnsecs
>> snip.ThreadSafe2Counter: got: 100 expected: 100 in 44 ms, 961
>> μs, and 5 hnsecs
>> snip.ThreadSafe3Counter: got: 100 expected: 100 in 42 ms, 512
>> μs, and 8 hnsecs
>> snip.ThreadUnsafeCounter: got: 100 expected: 100 in 2 ms, 108
>> μs, and 5 hnsecs
>>
>>
>>
>>
>>
> Thank you for looking into it.
> This seems to be quite good.
> I did expect something along those lines, but got the mentioned numbers
> on my OS X MacBook. Perhaps it's an OS X glitch.
> 
Thanks for the hint about the OS. I reran the tests on a Linux machine,
and there everything is fine!
Linux dlang code:
app.AtomicCounter: got: 100 expected: 100 in 24 ms, 387 μs, and
3 hnsecs
app.ThreadSafe1Counter: got: 100 expected: 100 in 143 ms, 534
μs, and 9 hnsecs
app.ThreadSafe2Counter: got: 100 expected: 100 in 159 ms, 685
μs, and 1 hnsec
app.ThreadUnsafeCounter: got: 399937 expected: 100 in 9 ms and 556 μs
from example got: 156 ms, 198 μs, and 9 hnsecs


Linux Java code:
counter.CounterTest > testAtomicIntCounter STANDARD_OUT
counter.AtomicIntCounter@1f2a2347 expected: 100 got: 100 in:
29ms

counter.CounterTest > testAtomicLongCounter STANDARD_OUT
counter.AtomicLongCounter@675ad891 expected: 100 got: 100
in: 24ms

counter.CounterTest > testThreadSafe2Counter STANDARD_OUT
counter.ThreadSafe2Counter@3043c6d2 expected: 100 got: 100
in: 38ms

counter.CounterTest > testThreadSafeCounter STANDARD_OUT
counter.ThreadSafe1Counter@bac4ba3 expected: 100 got: 100
in: 145ms

counter.CounterTest > testThreadUnsafeCounter STANDARD_OUT
counter.ThreadUnsafeCounter@2fe82bf8 expected: 100 got: 433730
in: 9ms


Could someone check the numbers on another OS-X machine? Unfortunately I
only have one available.

Thanks in advance!



Re: Speed of synchronized

2016-10-17 Thread Christian Köstlin via Digitalmars-d-learn
On 17/10/16 14:09, Daniel Kozak via Digitalmars-d-learn wrote:
> Dne 16.10.2016 v 10:41 Christian Köstlin via Digitalmars-d-learn napsal(a):
>> Hi,
>>
>> for an exercise I had to implement a thread safe counter.
>> This is what I came up with:
>> 
>>
>> btw. I run the code with dub run --build=release
>>
>> Thanks in advance,
>> Christian
> So I have done some testing, on my pc:
> Java result
> counter.AtomicLongCounter@7ff5e7d8 expected: 200 got: 100 in: 83ms
> counter.ThreadSafe2Counter@59b44e4b expected: 200 got: 100 in: 77ms
> counter.ThreadSafe1Counter@2e5f6b4b expected: 200 got: 100 in:
> 154ms
> counter.ThreadUnsafeCounter@762b155d expected: 200 got: 730428 in: 13ms
> 
> and my D results (code: http://dpaste.com/3QFXACY ):
> snip.AtomicCounter: got: 100 expected: 100 in 77 ms and 783 μs
> snip.ThreadSafe1Counter: got: 100 expected: 100 in 287 ms, 727
> μs, and 3 hnsecs
> snip.ThreadSafe2Counter: got: 100 expected: 100 in 281 ms, 117
> μs, and 1 hnsec
> snip.ThreadSafe3Counter: got: 100 expected: 100 in 158 ms, 480
> μs, and 2 hnsecs
> snip.ThreadUnsafeCounter: got: 100 expected: 100 in 6 ms, 682
> μs, and 1 hnsec
> 
> so atomic is same as in Java pthread_mutex is same speed as java
> synchronized
> D mutexes and D synchronized are almost same, I belive that if I could
> setup same attrs as in pthread version it will be around 160ms too.
> 
> Unsafe is almost same for D and java. Only java ReentrantLock seems to
> work better. I believe there is some trick, so it will end up not using
> mutexes in the end at all. For example consider this change in D code:
> 
> void doIt(alias counter)() {
>   auto thg = new ThreadGroup();
>   for (int i=0; i<NR_OF_THREADS; ++i) {
>  thg.create(!(counter));
>   }
>   thg.joinAll();
> }
> 
> change it to
> 
> void doIt(alias counter)() {
>   auto thg = new ThreadGroup();
>   for (int i=0; i<NR_OF_THREADS; ++i) {
> auto tc = thg.create(!(counter));
> tc.join();
>   }
> }
> 
> and results are:
> 
> snip.AtomicCounter: got: 100 expected: 100 in 22 ms, 251 μs, and
> 6 hnsecs
> snip.ThreadSafe1Counter: got: 100 expected: 100 in 46 ms, 146
> μs, and 3 hnsecs
> snip.ThreadSafe2Counter: got: 100 expected: 100 in 44 ms, 961
> μs, and 5 hnsecs
> snip.ThreadSafe3Counter: got: 100 expected: 100 in 42 ms, 512
> μs, and 8 hnsecs
> snip.ThreadUnsafeCounter: got: 100 expected: 100 in 2 ms, 108
> μs, and 5 hnsecs
> 
> 
> 
> 
> 
Thank you for looking into it.
This seems to be quite good.
I did expect something along those lines, but got the mentioned numbers
on my OS X MacBook. Perhaps it's an OS X glitch.



Re: Speed of synchronized

2016-10-17 Thread Christian Köstlin via Digitalmars-d-learn
On 17/10/16 06:55, Daniel Kozak via Digitalmars-d-learn wrote:
> Dne 16.10.2016 v 10:41 Christian Köstlin via Digitalmars-d-learn napsal(a):
> 
>> My question now is, why is each mutex based thread safe variant so slow
>> compared to a similar java program? The only hint could be something
>> like:
>> https://blogs.oracle.com/dave/entry/java_util_concurrent_reentrantlock_vs 
>> that
>> mentions, that there is some magic going on underneath.
>> For the atomic and the non thread safe variant, the d solution seems to
>> be twice as fast as my java program, for the locked variant, the java
>> program seems to be 40 times faster?
>>
>> btw. I run the code with dub run --build=release
>>
>> Thanks in advance,
>> Christian
> Can you post your timings (both D and Java)?  And can you post your java
> code?
Hi,

thanks for asking. I attached my Java and D sources.
Both try to do more or less the same thing. They spawn 100 threads that
call increment on a counter object 1 times. The implementation of
the counter object is exchanged between an obviously broken
thread-unsafe implementation, one with atomic operations, and some
mutex-based implementations.

To run the Java version, call ./gradlew clean build:
->
counter.AtomicIntCounter@25992ae3 expected: 200 got: 100 in: 22ms
counter.AtomicLongCounter@2539f946 expected: 200 got: 100 in: 17ms
counter.ThreadSafe2Counter@527d56c2 expected: 200 got: 100 in: 33ms
counter.ThreadSafe1Counter@6fd8b1a expected: 200 got: 100 in: 173ms
counter.ThreadUnsafeCounter@6bb33878 expected: 200 got: 562858 in: 10ms

Obviously the unsafe implementation is fastest, followed by atomics.
The version with reentrant locks performs very well, whereas the
implementation with synchronized is the slowest.

To run the D version, call dub test (please note that the dub test
build is configured like this:
buildType "unittest" {
  buildOptions "releaseMode" "optimize" "inline" "unittests" "debugInfo"
}
so it should have release code speed and quality).

->
app.AtomicCounter: got: 100 expected: 100 in 23 ms, 852 μs, and
6 hnsecs
app.ThreadSafe1Counter: got: 100 expected: 100 in 3 secs, 673
ms, 232 μs, and 6 hnsecs
app.ThreadSafe2Counter: got: 100 expected: 100 in 3 secs, 684
ms, 416 μs, and 2 hnsecs
app.ThreadUnsafeCounter: got: 690073 expected: 100 in 8 ms and 540 μs
from example got: 3 secs, 806 ms, and 258 μs

Here again, the unsafe implementation is the fastest, and atomic
performs in the same ballpark as Java; only the thread-safe variants
are far off.

thanks for looking into this,
best regards,
christian






Re: Speed of synchronized

2016-10-17 Thread Christian Köstlin via Digitalmars-d-learn
On 16/10/16 19:50, tcak wrote:
> On Sunday, 16 October 2016 at 08:41:26 UTC, Christian Köstlin wrote:
>> Hi,
>>
>> for an exercise I had to implement a thread safe counter. This is what
>> I came up with:
>>
>> [...]
> 
> Could you try that:
> 
> class ThreadSafe3Counter: Counter{
>   private long counter;
>   private core.sync.mutex.Mutex mtx;
> 
>   public this() shared{
>   mtx = cast(shared)( new core.sync.mutex.Mutex );
>   }
> 
>   void increment() shared {
>   (cast()mtx).lock();
>   scope(exit){ (cast()mtx).unlock(); }
> 
> core.atomic.atomicOp!"+="(this.counter, 1);
>   }
> 
>   long get() shared {
> return counter;
>   }
> }
> 
> 
> Unfortunately, there are some stupid design decisions in D about
> "shared", and some people does not want to accept them.
> 
> Example while you are using mutex, so you shouldn't be forced to use
> atomicOp there. As a programmer, you know that it will be protected
> already. That is a loss of performance in the long run.
Thanks for the implementation; I think this is nicer than using
__gshared. I think using atomic operations and mutexes at the same time
does not make any sense; it should be one or the other.

thanks,
Christian



Speed of synchronized

2016-10-16 Thread Christian Köstlin via Digitalmars-d-learn
Hi,

for an exercise I had to implement a thread safe counter.
This is what I came up with:

---SNIP---

import std.stdio;
import core.thread;
import std.conv;
import std.datetime;
static import core.atomic;
import core.sync.mutex;

int NR_OF_THREADS = 100;
int NR_OF_INCREMENTS = 1;

interface Counter {
  void increment() shared;
  long get() shared;
}
class ThreadUnsafeCounter : Counter {
  long counter;
  void increment() shared {
counter++;
  }
  long get() shared {
return counter;
  }
}

class ThreadSafe1Counter : Counter {
  private long counter;
  synchronized void increment() shared {
counter++;
  }
  long get() shared {
return counter;
  }
}

class ThreadSafe2Counter : Counter {
  private long counter;
  __gshared Mutex lock; //
http://forum.dlang.org/post/rzyooanimrynpmqly...@forum.dlang.org
  this() shared {
lock = new Mutex;
  }
  void increment() shared {
synchronized (lock) {
  counter++;
}
  }
  long get() shared {
return counter;
  }
}

class AtomicCounter : Counter {
  private long counter;
  void increment() shared {
core.atomic.atomicOp!"+="(this.counter, 1);
  }
  long get() shared {
return counter;
  }
}
void main() {
  void runWith(Counter)() {
shared Counter counter = new shared Counter();
void doIt() {
  Thread[] threads;
  for (int i=0; i

Re: blog.dlang.org

2016-06-21 Thread Christian Köstlin via Digitalmars-d-learn
On 22/06/16 01:51, Seb wrote:
> On Tuesday, 21 June 2016 at 23:36:41 UTC, Leandro Motta Barros wrote:
>> Try http://dlang.org/blog/
>>
>> But, indeed, I would expect blog.dlang.org to work...
>>
>> Cheers,
>>
>> LMB
>>
>> On Tue, Jun 21, 2016 at 6:47 PM, Christian Köstlin <
>> digitalmars-d-learn@puremagic.com> wrote:
>>
>>> I just wanted to have a look at the new blog post about ldc, and
>>> entered blog.dlang.org without thinking into the browser.
>>>
>>> This does not lead to the official blog anymore, but to the old
>>> digitalmars website.
> 
> Good catch - reported: https://github.com/dlang/D-Blog-Theme/issues/17 ;-)
Thanks!
I was sure there was a right way of reporting this.
Next time I will open an issue on GitHub.

christian


blog.dlang.org

2016-06-21 Thread Christian Köstlin via Digitalmars-d-learn
I just wanted to have a look at the new blog post about LDC, and entered
blog.dlang.org into the browser without thinking.

This does not lead to the official blog anymore, but to the old
digitalmars website.



how to get rid of "cannot deduce function from argument types" elegantly

2016-06-13 Thread Christian Köstlin via Digitalmars-d-learn
I made a small (it could be reduced further) example that creates and
walks a templated binary tree. The tree also gets a factory function
that uses type deduction to conveniently construct a tree.
Unfortunately this does not compile if I remove the three ugly
functions between the /* these should go */ comments.

```
fibertest.d(49): Error: template fibertest.tree cannot deduce function
from argument types !()(int, typeof(null), typeof(null)), candidates are:
fibertest.d(41):fibertest.tree(T)(T node, Tree!T left, Tree!T right)
```

Is there a more elegant way to fix this?

Another fix would be to provide 4 factory functions like this (still not
really nice):
```
tree(Tree!T left, T node, Tree!T right) {...}
tree(T node, Tree!T right) {...}
tree(Tree!T left, T node) {...}
tree(T node) {...}
```
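
One possible simplification, sketched below under the assumption that
the compiler can deduce T from the node argument alone: take the node
first and default the children to null, so leaves need no null
literals at all.

```d
// hypothetical alternative factory, replacing all four overloads
Tree!T tree(T)(T node, Tree!T left = null, Tree!T right = null) {
  return new Tree!T(left, node, right);
}

void example() {
  auto t1 = tree(3, tree(2, tree(1))); // left-leaning chain
  // if an explicit null literal still trips deduction, a typed null works:
  auto t2 = tree(1, Tree!int.init, tree(2, Tree!int.init, tree(3)));
}
```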

```d
import std.concurrency, std.stdio, std.range;

class Tree(T) {
  Tree!T left;
  T node;
  Tree!T right;

  this(Tree!T left, T node, Tree!T right) {
this.left = left;
this.node = node;
this.right = right;
  }

  auto inorder() {
return new Generator!T(() => yieldAll(this));
  }

  private void yieldAll(Tree!T t) {
if (t is null) return;

yieldAll(t.left);
yield(t.node);
yieldAll(t.right);
  }
}

/* these should go */

Tree!T tree(T)(typeof(null) left, T node, typeof(null) right) {
  return new Tree!T(null, node, null);
}
Tree!T tree(T)(Tree!T left, T node, typeof(null) right) {
  return new Tree!T(left, node, null);
}
Tree!T tree(T)(typeof(null) left, T node, Tree!T right) {
  return new Tree!T(null, node, right);
}

/* these should go */

Tree!T tree(T)(Tree!T left, T node, Tree!T right) {
  return new Tree!T(left, node, right);
}

void main(string[] args) {
  //     3
  //    /
  //   2
  //  /
  // 1
  auto t1 = tree(tree(tree(null, 1, null), 2, null), 3, null);
  // 1
  //  \
  //   2
  //    \
  //     3
  auto t2 = tree(null, 1, tree(null, 2, tree(null, 3, null)));
  writeln(t1.inorder());
  writeln(t2.inorder());
  writeln(t1.inorder().array == t2.inorder().array);
}
```


Re: __traits and string mixins

2015-12-04 Thread Christian Köstlin via Digitalmars-d-learn

On 04/12/15 21:49, Nicholas Wilson wrote:

On Thursday, 3 December 2015 at 13:36:16 UTC, Christian Köstlin wrote:

Hi,

I started an experiment with the informations that are available for
compile time reflection.

[...]


I think CyberShadow (aka Vladimir Panteleev) has done something similar
to this
http://blog.thecybershadow.net/2014/08/05/ae-utils-funopt/

Thanks for the pointer!
funopt really does look nice, and it also looks finished!

regards,
christian



__traits and string mixins

2015-12-03 Thread Christian Köstlin via Digitalmars-d-learn

Hi,

I started an experiment with the informations that are available for 
compile time reflection.


What I wanted to create is a Thor-like CLI parser library that forces
you to encapsulate your programs into subclasses of Dli. The commands
and options that are understood by the generated CLI are taken from the
members, which have to be annotated with UDAs. The __traits facility is
used to generate from those a string mixin, which also has to be added
to your class. A sample usage would look like this:


class Git : Dli {
  @Option string verbose;

  @Task void helloWorld(string option1, string option2) {
    // ... your code here ...
  }
  mixin(createDli!(Git));
}

this program should be able to understand:

./git --verbose helloWorld --option1=pretty --option2=whatever

When I created this stuff, the __traits facility got me thinking: I
iterate over e.g. allMembers of a class, but the result of this code is
mixed back into the class (and creates new members along the way).

So how exactly is this done compiler-wise?
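
To illustrate the allMembers-plus-string-mixin pattern with a toy
example (my own sketch, not code from the library; it mixes in at
module scope to sidestep the in-class chicken-and-egg, which is exactly
the open question above):

```d
struct Config {
  string verbose;
  void helloWorld() {}
}

// walk the members at compile time and generate new code as a string
string generateMembersList(T)() {
  string elems;
  foreach (m; __traits(allMembers, T))
    elems ~= `"` ~ m ~ `", `;
  return "enum membersOf" ~ T.stringof ~ " = [" ~ elems ~ "];";
}

// mix the generated declaration back in
mixin(generateMembersList!Config());

void main() {
  import std.stdio : writeln;
  writeln(membersOfConfig); // e.g. ["verbose", "helloWorld"]
}
```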

Another question that came up for me: would it not be awesome if the
whole AST were available in __traits? The facilities as they are, are
already a little more powerful than e.g. Java reflection, because you
can, for example, get the names of a method's parameters.


thanks for your input,

christian

P.S.: If you are interested in a very rough first version of the code,
or if you have enhancements for it, please have a look at:
https://github.com/gizmomogwai/dli