Re: Which application is much suited and which is not.

2016-04-16 Thread Eugene Wissner via Digitalmars-d-learn

On Saturday, 16 April 2016 at 14:08:05 UTC, newB wrote:
Let's say you have decided to use the D programming language. For 
what kind of applications would you choose D, and for what kind of 
applications would you not choose it?


I would use D for web programming, desktop applications and 
system programs. Though the D website says D can be used for 
scripting, I would never choose D for scripting. Programs that 
rely on a lot of system utilities or that have to call other 
shell scripts, I would write in bash.


Re: I can has @nogc and throw Exceptions?

2016-07-13 Thread Eugene Wissner via Digitalmars-d-learn
I'm currently writing a library that is 100% @nogc but not 
nothrow, and I slowly begin to believe that I should publish it 
already, though it isn't ready yet, at least as an example.
std.experimental.allocator doesn't work nicely with @nogc; for 
example, dispose calls destroy, which isn't @nogc.
I wrote a primitive native allocator for Linux and some helper 
functions that replace Phobos functions until they are 
@nogc-ready. For example, for throwing exceptions:


void raise(T : Throwable, A...)(Allocator allocator, auto ref A args)
{
auto e = make!T(allocator, args);
throw e;
}


and then you can throw with allocator.raise!Exception("bla-bla").
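
For completeness, the catching side then has to give the memory 
back. A hypothetical sketch (Allocator, raise and dispose here are 
my own helpers, not Phobos; the dispose-like helper is assumed):

void example(Allocator allocator)
{
    try
    {
        allocator.raise!Exception("bla-bla");
    }
    catch (Exception e)
    {
        // handle the error, then free the exception manually,
        // since it doesn't live on the GC heap
        allocator.dispose(e);
    }
}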


On Wednesday, 13 July 2016 at 16:13:21 UTC, Adam Sansier wrote:
On Wednesday, 13 July 2016 at 11:39:11 UTC, Lodovico Giaretta 
wrote:

On Wednesday, 13 July 2016 at 00:57:38 UTC, Adam Sansier wrote:

[...]


You shall use a static per-thread Region allocator[1] backed 
by Mallocator[2].

Then you just make[3] exceptions inside it and throw them.
So you can allocate and chain exceptions until you exhaust the 
memory reserved on creation.
Whenever you don't need the exception chain anymore (i.e. you 
caught them and the program is back in "normal" mode), you just 
reset the region allocator, so you have all of your memory again 
for the next exception chain.


[1] 
https://dlang.org/phobos/std_experimental_allocator_building_blocks_region.html
[2] 
https://dlang.org/phobos/std_experimental_allocator_mallocator.html

[3] https://dlang.org/phobos/std_experimental_allocator.html


Am I going to have to do all this myself or is it already done 
for me somewhere?





Re: Asynchronous Programming and Eventhandling in D

2016-07-05 Thread Eugene Wissner via Digitalmars-d-learn

On Tuesday, 5 July 2016 at 08:24:43 UTC, O/N/S wrote:

Hi ("Grüss Gott")

I like the asynchronous events in Javascript.
Is something similar possible in D?

Found Dragos Carp's asynchronous library 
(https://github.com/dcarp/asynchronous).
Are there any more integrated (in Phobos/in D) ways to work 
asynchronously?


An example: one server asks a second server to calculate 
something big. The first server continues with its work until 
the answer comes back from the second.

And so on...

Using threads or fibers would be a way, but it doesn't have the 
same elegance as the JavaScript way. (To avoid discussions: D is 
better ;-)



Greetings from Munich,
Ozan

Servus,

I'm currently rewriting the base skeleton of libev in D (only for 
Linux for now) for web development as well. The next steps would 
be data structures, a basic server, futures and so on...
I was working with dcarp's asynchronous and I found it very, very 
good. It is so far the best I've seen in D for async programming 
(I mean its design and usability).


Can you describe more concretely what you would like to see? I 
know JS, but how is it supposed to work in D? Maybe you can give 
some example, some kind of pseudo code? It would help me a lot to 
build a concept, and maybe we will someday see something usable 
in this area :)




Re: Asynchronous Programming and Eventhandling in D

2016-07-06 Thread Eugene Wissner via Digitalmars-d-learn

On Wednesday, 6 July 2016 at 08:39:03 UTC, chmike wrote:

On Tuesday, 5 July 2016 at 20:38:53 UTC, Eugene Wissner wrote:

On Tuesday, 5 July 2016 at 08:24:43 UTC, O/N/S wrote:

Hi ("Grüss Gott")

I like the asynchronous events in Javascript.
Is something similar possible in D?

Found Dragos Carp's asynchronous library 
(https://github.com/dcarp/asynchronous).
Are there any more integrated (in Phobos/in D) ways to work 
asynchronously?


An example: one server asks a second server to calculate 
something big. The first server continues with its work until 
the answer comes back from the second.

And so on...

Using threads or fibers would be a way, but it doesn't have the 
same elegance as the JavaScript way. (To avoid discussions: D is 
better ;-)



Greetings from Munich,
Ozan

Servus,

I'm currently rewriting the base skeleton of libev in D (only 
for Linux for now) for web development as well. The next steps 
would be data structures, a basic server, futures and so on...
I was working with dcarp's asynchronous and I found it very, 
very good. It is so far the best I've seen in D for async 
programming (I mean its design and usability).


Can you describe more concretely what you would like to see? I 
know JS, but how is it supposed to work in D? Maybe you can 
give some example, some kind of pseudo code? It would help me a 
lot to build a concept, and maybe we will someday see something 
usable in this area :)


The problem I see with libev is that it isn't compatible with 
the IOCP API of Windows.
The C++ Boost.Asio library, on the contrary, is compatible with 
both the select/epoll/kqueue model and the Windows IOCP model.


This is the reason I started a D implementation of asio I 
called dasio [https://github.com/chmike/dasio]. Unfortunately 
my day job didn't leave me any time to make progress on it. The 
only thing I managed to implement so far is asio's error code 
system.


I'm glad to see other people interested in working on this. We 
should definitely find a way to combine our efforts. There is 
already significant work done in the other libraries.


My feeling is that providing support for very efficient IO in 
Phobos might have a strong impact on D's visibility and 
adoption for backend applications (e.g. servers). Performance 
is a very strong argument for adoption in such a context.


A list of requirements has already been published on the wiki.

What I think is now missing is a benchmarking tool so that we 
can get numbers for each async lib implementation that we can 
also compare with a raw C implementation using the native 
functions.


The only reason libev was chosen is that it is the simplest 
implementation I know about: a few C files. I had an educational 
purpose: I wanted to see how an event loop works at a low level. 
Asio was a no-go for me, since I haven't written a lot of C++ 
and can only read the code a bit. So I'm not hanging on to libev. 
The only other implementation I would consider is the Python 
event library. It is also pure C code, and at first sight it 
looked much more cleanly written than libev.


Now there are two problems with my work:
1) The first is something we are all tired of talking about: 
manual memory management. I'm making a proof of concept and am 
writing code that is 100% marked as @nogc. That has side effects. 
For example, I allocate exceptions and they should be freed after 
catching, which is something Phobos doesn't do. As said, it is an 
experiment; I would like to see how it works.


2) Performance. The main reason I started writing at all is that 
the existing implementations seem to have performance as their 
only criterion. Performance is super important, but not at the 
cost of design, usability and extensibility. For example, in 
vibe.d (and libasync) everything possible is defined as a struct 
and everything that would be interesting to extend is final; and 
then you go to Phobos and see a "workaround". For example, 
Mallocator is a struct, but you have to be sure it is an 
allocator. How do you force the right interface?


static if (hasMember!(Allocator, "deallocate"))
{
   return impl.deallocate(b);
}
else
{
   return false;
}

Something like this would be OK for C, but for a language with 
interfaces it is an ugly design, regardless of how it performs. 
Everything I say here is IMHO.


Apart from these two points, I'm interested in some kind of 
collective work as well. It is very difficult as a one-man job. 
I didn't know other people were also working on similar 
implementations. Nice to know.


Are you aware of any benchmark tools in other languages that 
could be used?


Re: Asynchronous Programming and Eventhandling in D

2016-07-06 Thread Eugene Wissner via Digitalmars-d-learn

On Wednesday, 6 July 2016 at 14:57:08 UTC, chmike wrote:

On Wednesday, 6 July 2016 at 11:33:53 UTC, Eugene Wissner wrote:


The only reason libev was chosen is that it is the simplest 
implementation I know about: a few C files. I had an 
educational purpose: I wanted to see how an event loop works 
at a low level. Asio was a no-go for me, since I haven't written 
a lot of C++ and can only read the code a bit. So I'm not 
hanging on to libev. The only other implementation I would 
consider is the Python event library. It is also pure C code, 
and at first sight it looked much more cleanly written than libev.


Your project and work are valuable in many respects. I didn't 
mean to depreciate it.



Now there are two problems with my work:
1) The first is something we are all tired of talking about: 
manual memory management. I'm making a proof of concept and am 
writing code that is 100% marked as @nogc. That has side 
effects. For example, I allocate exceptions and they should be 
freed after catching, which is something Phobos doesn't do. As 
said, it is an experiment; I would like to see how it works.


2) Performance. The main reason I started writing at all 
is that the existing implementations seem to have performance 
as their only criterion. Performance is super important, but 
not at the cost of design, usability and extensibility. For 
example, in vibe.d (and libasync) everything possible is 
defined as a struct and everything that would be interesting to 
extend is final; and then you go to Phobos and see a 
"workaround". For example, Mallocator is a struct, but you have 
to be sure it is an allocator. How do you force the right 
interface?


That is a valid point. I know it is hard to get the best 
performance and optimal API design at the same time. The reason 
the methods are final is to avoid the overhead of virtual 
method indirection.



static if (hasMember!(Allocator, "deallocate"))
{
   return impl.deallocate(b);
}
else
{
   return false;
}

Something like this would be OK for C, but for a language 
with interfaces it is an ugly design, regardless of how it 
performs. Everything I say here is IMHO.


This would mean we need a new design pattern that supports both.

Apart from these two points, I'm interested in some kind of 
collective work as well. It is very difficult as a one-man job. 
I didn't know other people were also working on similar 
implementations. Nice to know.


Are you aware of any benchmark tools in other languages that 
could be used?


The benchmark tools available mainly test web servers, and the 
exact operation pattern is not very clear. One such benchmark 
tool is wrk [https://github.com/wg/wrk], which measures HTTP 
request speed. The problem is that it measures the performance 
of the I/O and the HTTP handling.


Here are some benchmark results:

* https://www.techempower.com/benchmarks/
* https://github.com/nanoant/WebFrameworkBenchmark

How can D be worse than Java?

My strategy would be to split the problem. First get the async 
I/O optimal. Then get HTTP handling optimal, and finally get 
database interaction optimal (the optimal async I/O should 
help). An investigation of the methods used by Java/Undertow to 
achieve this performance could help.



I would suggest implementing a benchmark client that performs 
some predefined patterns of I/O operations similar to web 
interactions or the most common types of interactions, and a 
server implemented in C using native system I/O operations. 
This server implementation would then be our reference.


What would be measured is how much slower our different D 
implementations are relative to the C reference implementation. 
This will allow us to have a benchmarking test that doesn't 
depend that much on the hardware.


Yes, that would be a way to go.

As far as I can see, Kore performs pretty well. One could write 
bindings first and then begin to rewrite it step by step. At 
every development step you have a working HTTP server, and you 
can use those HTTP benchmark tools for the benchmarking.


I'll have to look into it in the next few weeks.


typeof.stringof wrong type

2016-08-17 Thread Eugene Wissner via Digitalmars-d-learn
I have a problem where .stringof doesn't return what I'm 
expecting. Consider the following:


template A(string T)
{
enum A : bool
{
yes = true,
}
}

void main()
{
A!"asdf" a1;
typeof(a1) a2;
mixin(typeof(a1).stringof ~ " a3;");
}

I get an error: some.d-mixin-13|13 error| Error: template 
some.A(string T) is used as a type


Why does the second line in main() work but not the third one? 
typeof(a1).stringof seems to ignore the string template parameter 
T.

pragma(msg, typeof(a1).stringof) would print just "A".

Is it a bug?


Re: typeof.stringof wrong type

2016-08-17 Thread Eugene Wissner via Digitalmars-d-learn

On Wednesday, 17 August 2016 at 12:39:18 UTC, ag0aep6g wrote:

On 08/17/2016 02:08 PM, Eugene Wissner wrote:
I have a problem where .stringof doesn't return what I'm 
expecting.

Consider the following:

template A(string T)
{
enum A : bool
{
yes = true,
}
}

void main()
{
A!"asdf" a1;
typeof(a1) a2;
mixin(typeof(a1).stringof ~ " a3;");
}

I get an error: some.d-mixin-13|13 error| Error: template 
some.A(string

T) is used as a type

Why does the second line in main() work but not the third one?
typeof(a1).stringof seems to ignore the string template 
parameter T.

pragma(msg, typeof(a1).stringof) would print just "A".

Is it a bug?


Not exactly a bug. .stringof gives you a simple, readable name. 
It's not meant to be used in code generation. You can use 
std.traits.fullyQualifiedName instead:


import std.traits: fullyQualifiedName;
mixin(fullyQualifiedName!(typeof(a1)) ~ " a3;");


What I find strange is that if A isn't an enum but a class, 
.stringof returns the full type name, so I would expect it to 
behave the same in the code above.
I will test fullyQualifiedName later; the example above is a very 
simplified version of the code I had a problem with. Thanks anyway.


Re: trick to make throwing method @nogc

2017-02-25 Thread Eugene Wissner via Digitalmars-d-learn

On Saturday, 25 February 2017 at 20:02:56 UTC, ikod wrote:

On Saturday, 25 February 2017 at 19:59:29 UTC, ikod wrote:

Hello,

I have a method for range:

struct Range {
immutable(ubyte[]) _buffer;
size_t _pos;

@property void popFront() pure @safe {
enforce(_pos < _buffer.length, "popFront from empty buffer");

_pos++;
}
}

I'd like to have @nogc here, but I can't because enforce() is 
not @nogc.
I have a trick, but I'm not sure whether it is valid; in 
particular, I don't know whether optimization will preserve the 
code used for throwing:


import std.string;

struct Range {
immutable(ubyte[]) _buffer;
size_t  _pos;

this(immutable(ubyte[]) s) {
_buffer = s;
}
@property void popFront() pure @safe @nogc {
if (_pos >= _buffer.length ) {
auto _ = _buffer[$]; // throws RangeError
}
_pos++;
}
}

void main() {
auto r = Range("1".representation);
r.popFront();
r.popFront(); // throws
}

Is it ok to use it? Is there any better solution?

Thanks!


Found that I can use

@property void popFront() pure @safe @nogc {
if (_pos >= _buffer.length ) {
assert(0, "popFront for empty range");
}
_pos++;
}

which is both descriptive and can't be optimized out.


I made a test:

void main()
{
  assert(0);
}

it builds and doesn't throw if I compile with:
dmd -release

though it causes a segfault, which is probably a dmd bug. So I 
suppose it can be optimized out. And it isn't very descriptive; 
you probably throw the same AssertError for all errors. You can 
throw in @nogc code, you only have to allocate the exception 
somewhere other than on the GC heap and free it after catching. 
I'm writing a library myself that is completely @nogc, and I use 
exceptions this way:

- Allocate the exception
- throw
- catch
- free

A wrapper that unifies these 4 steps like enforce is pretty easy 
to implement.
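
For example, a minimal sketch of such a wrapper could look like 
the following (this is not my library's actual code; makeThrowable 
and enforceNoGC are made-up names, and plain malloc/free stands in 
for a real allocator):

import core.lifetime : emplace; // std.conv.emplace on older compilers
import core.stdc.stdlib : free, malloc;

// Allocate a throwable on the C heap instead of the GC heap.
T makeThrowable(T : Throwable, A...)(auto ref A args) @nogc
{
    enum size = __traits(classInstanceSize, T);
    auto memory = malloc(size);
    if (memory is null)
    {
        assert(0, "Out of memory");
    }
    return emplace!T(memory[0 .. size], args);
}

// enforce-like helper: allocate and throw if the condition fails.
void enforceNoGC(T : Throwable = Exception, A...)(bool condition, auto ref A args) @nogc
{
    if (!condition)
    {
        throw makeThrowable!T(args);
    }
}

void caller() @nogc
{
    try
    {
        enforceNoGC(false, "something went wrong");
    }
    catch (Exception e)
    {
        // ... handle the error, then give the memory back:
        free(cast(void*) e);
    }
}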


Re: trick to make throwing method @nogc

2017-02-25 Thread Eugene Wissner via Digitalmars-d-learn
On Saturday, 25 February 2017 at 20:49:51 UTC, Adam D. Ruppe 
wrote:
On Saturday, 25 February 2017 at 20:40:26 UTC, Eugene Wissner 
wrote:

it builds and doesn't throw if I compile with:
dmd -release
though it causes a segfault, what is probably a dmd bug.


No, that's by design. assert(0) compiles to a segfault 
instruction with -release.


A wrapper that unifies these 4 steps like enforce is pretty 
easy to implement.


yeah, it's easy to use exceptions in @nogc as long as the catch 
knows to free them too.


But anyway, a segfault is not very descriptive :)


Re: typeof.stringof wrong type

2016-08-19 Thread Eugene Wissner via Digitalmars-d-learn

fullyQualifiedName doesn't work with BitFlags, for example:

import std.stdio;
import std.typecons;
import std.traits;

enum Stuff
{
asdf
}

void main()
{
BitFlags!Stuff a;

typeof(a) b;
mixin(fullyQualifiedName!(typeof(a)) ~ " c;");
mixin(typeof(a).stringof ~ " d;");
}


Both mixins fail. fullyQualifiedName!(typeof(a)) becomes:
std.typecons.BitFlags!(test.Stuff, cast(Flag)false)

"cast(Flag)false" should be "cast(Flag!"unsafe")false". So string 
template parameter "unsafe" is missing. The same problem as I 
described before ("Flag" is an enum template like in my first 
example).




dmd 2.072.0 beta 2 no size because of forward reference

2016-10-21 Thread Eugene Wissner via Digitalmars-d-learn

Hey,

the code below compiles with dmd 2.071.2 but doesn't compile 
with the same command with dmd 2.072.0 beta 2. Maybe someone 
knows what's going wrong or whether it is a bug in 
2.071.2/2.072.0 (it is a reduced part of memutils):


app.d:
import memutils.utils;

struct HashMap(Key, Value)
{
int[] m_table; // NOTE: capacity is always POT

~this()
{
freeArray!(int)(m_table);
}
}

--
module memutils.allocators;

final class FreeListAlloc()
{
import memutils.utils : MallocAllocator;
}

--
module memutils.utils;

import memutils.allocators;

final class MallocAllocator
{
}

final class AutoFreeListAllocator()
{
FreeListAlloc!()[12] m_freeLists;
}

alias LocklessAllocator = AutoFreeListAllocator!();

R getAllocator(R)() {
return new R;
}

void freeArray(T)(auto ref T[] array)
{
	auto allocator = getAllocator!(LocklessAllocator); // freeing. Avoid allocating in a dtor

}



The command to compile:
dmd -c -Imemutils/ app.d -of/dev/null


Builds with the latest stable. Fails with the beta:

memutils/utils.d(9): Error: class 
memutils.utils.AutoFreeListAllocator!().AutoFreeListAllocator no 
size because of forward reference
memutils/utils.d(14): Error: template instance 
memutils.utils.AutoFreeListAllocator!() error instantiating
memutils/utils.d(11): Error: template instance 
memutils.allocators.FreeListAlloc!() error instantiating
memutils/utils.d(14):instantiated from here: 
AutoFreeListAllocator!()
app.d(9): Error: template instance memutils.utils.freeArray!int 
error instantiating

app.d(13):instantiated from here: HashMap!(int, uint)



Compiles with the latest beta if:
dmd -c -Imemutils/ memutils/* app.d -of/dev/null


Re: Primality test function doesn't work on large numbers?

2017-01-10 Thread Eugene Wissner via Digitalmars-d-learn

On Tuesday, 10 January 2017 at 03:02:40 UTC, Elronnd wrote:
Thank you!  Would you mind telling me what you changed aside 
from pow() and powm()?  diff isn't giving me readable results, 
since there was some other stuff I trimmed out of the original 
file.  Also, while this is a *lot* better, I still get some lag 
generating 1024-bit primes and I can't generate larger primes 
in a reasonable amount of time.  Maybe my genbigint() function 
is to blame?  It isn't efficient:


bigint genbigint(int numbits) {
bigint tmp;
while (numbits --> 0) {
tmp <<= 1;
tmp += uniform(0, 2);
}
return tmp;
}


Yes. You would normally get some random string/value only once, 
then apply MD5 or, better, SHA-512 to it (several times if you 
want a stronger hash) to get the right length, then take the hex 
digest and load it into a bigint.
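
A rough sketch of the idea (the function name is made up, and for 
real key generation you would want a proper cryptographic RNG):

import std.bigint : BigInt;
import std.bitmanip : nativeToLittleEndian;
import std.digest.sha;
import std.random : uniform;

BigInt genBigInt512()
{
    immutable seed = uniform!ulong();              // ask the RNG only once
    immutable ubyte[8] bytes = nativeToLittleEndian(seed);
    auto digest = sha512Of(bytes[]);               // 64 bytes = 512 bits
    auto hex = toHexString(digest);                // hex digest of the hash
    return BigInt("0x" ~ hex.idup);                // load it into a BigInt
}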


Re: Primality test function doesn't work on large numbers?

2017-01-08 Thread Eugene Wissner via Digitalmars-d-learn

On Sunday, 8 January 2017 at 07:52:33 UTC, Elronnd wrote:
I'm working on writing an RSA implementation, but I've run into 
a roadblock generating primes.  With more than 9 bits, my 
program either hangs for a long time (utilizing 100% CPU!) or 
returns a composite number.  With 9 or fewer bits, I get 
primes, but I have to run with a huge number of iterations to 
actually _get_ a random number.  It runs fast, though.  Why 
might this be?  Code: http://lpaste.net/1034777940820230144


I haven't read your code very closely, but I have an assumption 
and you can check whether it is helpful :)


I think your actual problem is this line:

z = pow(b, m) % integer;

If it does what I think, it can be horribly slow and memory 
consuming. You have to implement your own pow function that 
computes a ^ b mod c. Look into the Python source code or into 
"tanya" (D):
https://github.com/caraus-ecms/tanya/blob/master/source/tanya/math/package.d
It is the same algorithm that Phobos uses, but with the modulo 
operation built in and written a bit differently (my code is 
based neither on Python nor on Phobos and uses a different bigint 
implementation). You can then also rewrite pow(z, 2) % integer 
the same way. It will be faster.

Try to reduce bigint copying and arithmetic anyway if possible.
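
A minimal sketch of such a pow-mod (plain square-and-multiply, not 
the code from tanya or Phobos):

import std.bigint : BigInt;

BigInt powMod(BigInt base, BigInt exponent, BigInt modulus)
{
    BigInt result = 1;
    base %= modulus;
    while (exponent > 0)
    {
        if (exponent % 2 == 1)              // lowest bit of the exponent is set
        {
            result = (result * base) % modulus;
        }
        base = (base * base) % modulus;     // square for the next bit
        exponent >>= 1;
    }
    return result;
}

The problematic line then becomes z = powMod(b, m, integer), and 
pow(z, 2) % integer becomes powMod(z, BigInt(2), integer).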


Re: Is it possbile to specify a remote git repo as dub dependency?

2016-12-19 Thread Eugene Wissner via Digitalmars-d-learn

On Monday, 19 December 2016 at 14:45:07 UTC, biocyberman wrote:
On Monday, 19 December 2016 at 14:18:17 UTC, Jacob Carlborg 
wrote:

On 2016-12-19 13:11, biocyberman wrote:
I can write a short script to clone the remote git repo and 
use it as a
submodule. But if it is possible to do with dub, it will be 
more

convenient.


It's not currently possible.


I see; it is both a good thing and a bad thing. The good thing 
is that it encourages developers to submit packages to the 
central dub registry. The bad thing is that, when that doesn't 
happen soon enough, other developers who use the package have to 
do something for themselves.


To be honest, there is no need for a dub registry at all. Vim 
plugin managers can pull plugins from any GitHub repository. 
JavaScript ended up with 10 dependency managers, each of which 
has its own registry (with npm as the most official of them). The 
JS situation will happen to D as well if it becomes more popular.


Re: BitArray Slicing

2016-12-21 Thread Eugene Wissner via Digitalmars-d-learn

On Wednesday, 21 December 2016 at 09:08:51 UTC, Ezneh wrote:

Hi, in one of my projects I have to get a slice from a BitArray.

I am trying to achieve that like this :

void foo(BitArray ba)
{
   auto slice = ba[0..3]; // Assuming it has more than 4 elements

}

The problem is that I get an error :

"no operator [] overload for type BitArray".

Is there any other way to get a slice from a BitArray ?

Thanks,
Ezneh.


The problem is that BitArray keeps multiple elements in one byte. 
You can't return just three bits; in the best case you can return 
one byte with 8 elements.


What could be done is to return some internal range that gives 
access to the bits, but that isn't implemented.
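
As a workaround you can build such a view yourself on top of 
opIndex, for example (bitSlice is a made-up helper, not part of 
Phobos):

import std.algorithm : map;
import std.bitmanip : BitArray;
import std.range : iota;

auto bitSlice(BitArray ba, size_t lower, size_t upper)
{
    // lazily yields the bits in [lower, upper) via BitArray's opIndex
    return iota(lower, upper).map!(i => ba[i]);
}

void foo(BitArray ba)
{
    auto slice = ba.bitSlice(0, 3); // the first three bits, as a range of bool
}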


Re: returning struct, destructor

2016-12-21 Thread Eugene Wissner via Digitalmars-d-learn
On Wednesday, 21 December 2016 at 12:32:51 UTC, Nicholas Wilson 
wrote:
On Wednesday, 21 December 2016 at 11:45:18 UTC, Eugene Wissner 
wrote:
Consider we have a function that returns a struct. So for 
example:


import std.stdio;

struct A {
~this() {
writeln("Destruct");
}
}

A myFunc() {
auto a = A(), b = A();
if (false) {
return a;
}
return b;
}

void main() {
myFunc();
}

This prints "Destruct" 3 times with dmd 2.072.1. If I remove 
the if block, it prints "Destruct" only 2 times, which is the 
behavior I'm expecting. Why?

Thx


Structs are value types, so unless you pass them by 
pointer/reference, they get copied.


In myFunc it prints "Destruct" twice, once for a and once for b. 
In main it prints it one more time for the (discarded) A returned 
from myFunc.


Why if the "if block" is removed, the code prints "Destruct" only 
two times. One because "a" goes out of scope and one in the main 
function. I don't understand, why "if (false) ..." changes the 
behavior


returning struct, destructor

2016-12-21 Thread Eugene Wissner via Digitalmars-d-learn

Consider we have a function that returns a struct. So for example:

import std.stdio;

struct A {
~this() {
writeln("Destruct");
}
}

A myFunc() {
auto a = A(), b = A();
if (false) {
return a;
}
return b;
}

void main() {
myFunc();
}

This prints "Destruct" 3 times with dmd 2.072.1. If I remove the 
if block, it prints "Destruct" only 2 times, which is the 
behavior I'm expecting. Why?

Thx



Re: returning struct, destructor

2016-12-21 Thread Eugene Wissner via Digitalmars-d-learn

On Wednesday, 21 December 2016 at 17:49:22 UTC, kinke wrote:
Basic stuff such as this is appropriately tested. The named 
return value optimization is enforced by D (incl. unoptimized 
builds), so behavior doesn't change by this optimization. It's 
you who changed the behavior by removing the if. Due to the 
`if`, the compiler doesn't know whether it should construct `a` 
or `b` directly into the memory (sret pointee) provided by the 
caller. When omitting the `if`, it's clear that `b` is returned 
in all cases, so the compiler constructs `a` on the local stack 
(and destructs it before exiting the function), but emplaces 
`b` into `*sret` (i.e., the caller's stack) and can thus elide 
1x postblit + 1x dtor.


Thanks a lot. It makes sense. It just seemed weird that a 
conditional return value causes such a change. But I'm beginning 
to understand the background.


Re: returning struct, destructor

2016-12-21 Thread Eugene Wissner via Digitalmars-d-learn

On Wednesday, 21 December 2016 at 14:15:06 UTC, John C wrote:
On Wednesday, 21 December 2016 at 11:45:18 UTC, Eugene Wissner 
wrote:
This prints "Destruct" 3 times with dmd 2.072.1. If I remove 
the if block, it prints "Destruct" only 2 times, which is the 
behavior I'm expecting. Why?


Possibly to do with named return value optimisation.


Isn't an optimization that changes the behavior bad? I had a 
crash in code where the destructor did something meaningful and 
freed the same pointer twice.


Re: Best memory management D idioms

2017-03-07 Thread Eugene Wissner via Digitalmars-d-learn

On Tuesday, 7 March 2017 at 20:15:37 UTC, XavierAP wrote:

On Tuesday, 7 March 2017 at 18:21:43 UTC, Eugene Wissner wrote:
To avoid this from the beginning, it may be better to use 
allocators. You can use "make" and "dispose" from 
std.experimental.allocator the same way as New/Delete.


Thanks! looking into it.

Does std.experimental.allocator have a leak debugging tool like 
dlib's printMemoryLog()?


Yes, but printMemoryLog is only useful for a simple search for 
memory leaks anyway. For advanced debugging it is better to learn 
a memory debugger or profiler.


Re: Best memory management D idioms

2017-03-06 Thread Eugene Wissner via Digitalmars-d-learn

On Sunday, 5 March 2017 at 20:54:06 UTC, XavierAP wrote:
I was going to name this thread "SEX!!" but then I thought 
"best memory management" would get me more reads ;) Anyway now 
that I have your attention...
What I want to learn (not debate) is the currently available 
types, idioms etc. whenever one wants deterministic memory 
management. Please do not derail it debating how often 
deterministic should be preferred to GC or not. Just, whenever 
one should happen to require it, what are the available means? 
And how do they compare in your daily use, if any? If you want 
to post your code samples using special types for manual memory 
management, that would be great.


AFAIK (from here on please correct me wherever I'm wrong) the 
original D design was, if you don't want to use GC, then 
malloc() and free() are available from std.c. Pretty solid. I 
guess the downside is less nice syntax than new/delete, and 
having to check the returned value instead of exceptions. I 
guess these were the original reasons why C++ introduced 
new/delete but I've never been sure.


Then from this nice summary [1] I've learned about the 
existence of new libraries and Phobos modules: std.typecons, 
Dlib, and std.experimental.allocator. Sadly in this department 
D starts to look a bit like C++ in that there are too many 
possible ways to do one certain thing, and what's worse none of 
them is the "standard" way, and none of them is deprecated atm 
either. I've just taken a quick look at them, and I was 
wondering how many people prefer either, and what are their 
reasons and their experience.


dlib.core.memory and dlib.memory lack documentation, but 
according to this wiki page [2] I found, dlib defines 
New/Delete substitutes without GC a-la-C++, with the nice 
addition of a "memoryDebug" version (how ironclad is this to 
debug every memory leak?)


From std.typecons what caught my eye first is scoped() and 
Unique. std.experimental.allocator sounded quite, well, 
experimental or advanced, so I stopped reading before trying to 
wrap my head around all of it. Should I take another look?


scoped() seems to work nicely for auto variables, and if I 
understood it right, not only it provides deterministic 
management, but allocates statically/in the stack, so it is 
like C++ without pointers right? Looking into the 
implementation, I just hope most of that horribly unsafe 
casting can be taken care of at compile time. The whole thing 
looks a bit obscure under the hood and in its usage: auto is 
mandatory or else allocation doesn't hold, but even reflection 
cannot tell the difference at runtime between T and 
typeof(scoped!T) //eew. Unfortunately this also makes scoped() 
extremely unwieldy for member variables: their type has to be 
explicitly declared as typeof(scoped!T), and they cannot be 
initialized at the declaration. To me this looks like scoped() 
could be useful in some cases but it looks hardly recommendable 
to the same extent as the analogous C++ idiom.


Then Unique seems to be analogous to C++ unique_ptr, fair 
enough... Or are there significant differences? Your experience?


And am I right in assuming that scoped() and Unique (and 
dlib.core.memory) prevent the GC from monitoring the memory 
they manage (just like malloc?), thus also saving those few 
cycles? This I haven't seen clearly stated in the documentation.



[1] 
http://forum.dlang.org/post/stohzfatiwjzemqoj...@forum.dlang.org
[2] 
https://github.com/gecko0307/dlib/wiki/Manual-Memory-Management


The memory management in D is becoming a mess. Yes, D was 
developed with the GC in mind, and the attempts to make it usable 
without the GC came later. Now std has functions that allocate 
with the GC, there are containers that use malloc/free directly 
or reference counting for their internal storage, and there is 
std.experimental.allocator. And it doesn't really get better. 
There is also some effort to add reference counting directly into 
the language. I really fear we will soon have signatures like 
"void myfunc() @safe @nogc @norc..".
Stuff like RefCounted or Unique is similar to the C++ analogues, 
but not the same: they throw exceptions allocated with the GC, 
and factory methods (like Unique.create) use the GC to create the 
object.
Also, dlib's memory management is a nightmare: some stuff uses 
"new" and the GC, some "New" and "Delete". Some functions 
allocate memory and return it, and you never know whether it will 
be collected or whether you should free it; you have to look into 
the source code each time to see what the function does 
internally, otherwise you will end up with memory leaks or 
segmentation faults. dlib also has a lot of outdated code that 
isn't easy to update.


Re: Best memory management D idioms

2017-03-07 Thread Eugene Wissner via Digitalmars-d-learn

On Tuesday, 7 March 2017 at 17:37:43 UTC, XavierAP wrote:

On Tuesday, 7 March 2017 at 16:51:23 UTC, Kagamin wrote:

There's nothing like that of C++.


Don't you think New/Delete from dlib.core.memory fits the bill 
for C++-style manual dynamic memory management? It looks quite 
nice to me, being no more than a simple malloc wrapper with 
constructor/destructor calling and type safety. Plus 
printMemoryLog() for debugging, which is much easier than valgrind.


do you want to manage non-memory resources with these memory 
management mechanisms too?


I wasn't thinking about this now, but I'm sure the need will 
come up.


Yes. For simple memory management New/Delete would be enough. But 
you depend on your libc in this case, which is mostly not a 
problem. From experience it wasn't enough for some code bases, so 
the C world invented some workarounds:


1) Link to another libc providing a different malloc/free 
implementation
2) Use macros that default to the libc's malloc/free but can be 
set at compile time to an alternative implementation (mbedtls, 
for example, uses mbedtls_malloc, mbedtls_calloc and mbedtls_free 
macros)


To avoid this from the beginning, it may be better to use 
allocators. You can use "make" and "dispose" from 
std.experimental.allocator the same way as New/Delete.
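
For example (a small sketch using the Mallocator from Phobos):

import std.experimental.allocator : dispose, make;
import std.experimental.allocator.mallocator : Mallocator;

class Foo
{
    int x;
    this(int x) { this.x = x; }
}

void main()
{
    // roughly the same usage as dlib's New/Delete, but the allocator
    // is explicit and can be swapped out later
    auto foo = Mallocator.instance.make!Foo(42);
    scope(exit) Mallocator.instance.dispose(foo);

    assert(foo.x == 42);
}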


I tried to introduce allocators in dlib, but it failed because 
dlib is difficult to modify due to the other projects based on it 
(although, to be honest, it was mostly a communication problem, 
as often happens), so I started a similar library from scratch.


Re: Question on SSE intrinsics

2017-07-29 Thread Eugene Wissner via Digitalmars-d-learn

On Saturday, 29 July 2017 at 16:01:07 UTC, piotrekg2 wrote:

Hi,
I'm trying to port some of my c++ code which uses sse2 
instructions into D. The code calls the following intrinsics:


- _mm256_loadu_si256
- _mm256_movemask_epi8

Do they have any equivalent intrinsics in D?

I'm compiling my c++ code using gcc.

Thanks,
Piotr


https://stackoverflow.com/questions/14002946/explicit-simd-code-in-d

I don't think anything has changed since then.


Re: D Debug101

2017-07-29 Thread Eugene Wissner via Digitalmars-d-learn

On Saturday, 29 July 2017 at 14:41:32 UTC, Kagamin wrote:

People who don't use IDE, use printf debugging.


or gdb, which has several GUI frontends if needed.


Re: Base class' constructor is not implicitly inherited for immutable classes. A bug or a feature?

2017-07-20 Thread Eugene Wissner via Digitalmars-d-learn

On Wednesday, 19 July 2017 at 16:00:56 UTC, Piotr Mitana wrote:

Hello, I have this code:

immutable class Base
{
this() {}
}

immutable class Derived : Base {}

void main()
{
new immutable Derived();
}

I'd like class Derived to automatically inherit the default 
constructor from Base. However, this is not the case:


main.d(6): Error: class main.Derived cannot implicitly generate 
a default ctor when base class main.Base is missing a default 
ctor


Is it a bug or it should be like this?


Interestingly, the same code works without immutable.


Re: Profiling after exit()

2017-07-27 Thread Eugene Wissner via Digitalmars-d-learn

On Thursday, 27 July 2017 at 14:52:18 UTC, Stefan Koch wrote:

On Thursday, 27 July 2017 at 14:30:33 UTC, Eugene Wissner wrote:
I have a multi-threaded application, whose threads normally 
run forever. But I need to profile this program, so I compile 
the code with -profile, send a SIGTERM and call exit(0) from 
my signal handler to exit the program. The problem is that I 
get the profiling information only from the main thread, but 
not from the other ones.


[...]


You will need to run it single threaded.
If you want to use the builtin-profiler.


Are there profilers that work well with dmd? valgrind? OProfile?


Profiling after exit()

2017-07-27 Thread Eugene Wissner via Digitalmars-d-learn
I have a multi-threaded application, whose threads normally run 
forever. But I need to profile this program, so I compile the 
code with -profile, send a SIGTERM and call exit(0) from my 
signal handler to exit the program. The problem is that I get the 
profiling information only from the main thread, but not from the 
other ones.


Is there a way to get the profiling information from all threads 
before terminating the program? Maybe some way to finish the 
threads gracefully? Or manually call the "write trace.log" 
function for a thread?


Here is a small example that demonstrates the problem:

import core.thread;
import core.stdc.stdlib;

shared bool done = false;

void run()
{
while (!done)
{
foo;
}
}

void foo()
{
new Object;
}

void main()
{
auto thread = new Thread(&run);
thread.start;
Thread.sleep(3.seconds);

exit(0); // Replace with "done = true;" to get the expected behaviour.

}

There is already an issue: 
https://issues.dlang.org/show_bug.cgi?id=971
The hack was to call trace_term() in internal/trace. trace_term() 
doesn't exist anymore, so I tried to export the static destructor 
from druntime/src/rt/trace.d with:


extern (C) void _staticDtor449() @nogc nothrow;

(on my system) and call it manually. I get some more information 
this way, but the numbers in the profiling report are still wrong.


Re: Profiling after exit()

2017-07-28 Thread Eugene Wissner via Digitalmars-d-learn

On Friday, 28 July 2017 at 06:32:59 UTC, Jacob Carlborg wrote:

On 2017-07-27 16:30, Eugene Wissner wrote:
I have a multi-threaded application, whose threads normally 
run forever. But I need to profile this program, so I compile 
the code with -profile, send a SIGTERM and call exit(0) from 
my signal handler to exit the program. The problem is that I 
get the profiling information only from the main thread, but 
not from the other ones.


Is there a way to get the profiling information from all 
threads before terminating the program? Maybe some way to 
finish the threads gracefully? or manully call "write 
trace.log"-function for a thread?


As others have mentioned, you should in general avoid calling 
"exit" in a D program. There's a C function called "atexit" 
that allows you to register a callback that is called after calling 
"exit". You could perhaps join the threads there. I don't know 
if that helps with the profiling though.


Unfortunately I can't join the threads, because then the program 
wouldn't exit; the threads normally run forever. I thought maybe 
there is some way to kill a thread gracefully on Linux so it can 
write its profiling information, or another way to get the 
profiling data.

Thanks anyway.


Re: Struct Postblit Void Initialization

2017-07-30 Thread Eugene Wissner via Digitalmars-d-learn

On Sunday, 30 July 2017 at 19:22:07 UTC, Jiyan wrote:

Hey,
just wanted to know whether something like this would be 
possible somehow:


struct S
{
int m;
int n;
this(this)
{
m = void;
n = n;
}
}

So the whole struct is not copied every time e.g. a function is 
called; only n has to be "filled".


this(this) is called after the struct is copied, so doing 
something in the postblit constructor is too late. The second 
thing is that the struct is copied with memcpy. What you propose 
would require two memcpy calls, one for the first part of the 
struct and one for the second. Besides being difficult to 
implement, it may reduce the performance of the copying, since 
memcpy is optimized to copy contiguous memory chunks.
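
A small example that shows the order (the writeln is only there to 
make it visible):

import std.stdio;

struct S
{
    int m;
    int n;

    this(this)
    {
        // the whole struct has already been copied bit by bit at this
        // point; m and n already hold the source's values
        writeln("postblit: m = ", m, ", n = ", n);
    }
}

void main()
{
    auto a = S(1, 2);
    auto b = a; // bitwise copy first, then this(this) runs on b
}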


Re: Do array literals still always allocate?

2017-05-14 Thread Eugene Wissner via Digitalmars-d-learn

On Sunday, 14 May 2017 at 11:45:12 UTC, ag0aep6g wrote:

On 05/14/2017 01:40 PM, Nicholas Wilson wrote:

dynamic array literals is what I meant.


I don't follow. Can you give an example in code?


void main()
{
ubyte[] arr = [ 1, 2, 3, 4, 5 ];

assert(arr == [ 1, 2, 3, 4, 5 ]);
}

Both the assignment and the comparison allocate.


Re: Read conditional function parameters during compile time using __traits

2017-06-21 Thread Eugene Wissner via Digitalmars-d-learn

On Wednesday, 21 June 2017 at 19:39:14 UTC, timvol wrote:
Hi! I've a simple array of bytes I received using sockets. What 
I want to do is to calculate the target length of the message. 
So, I defined a calcLength() function for each function code 
(it's the first byte in my array). My problem is that I defined 
the calcLength() function using conditions so that each 
calcLength should be called depending on the respective 
function code, see below:


module example;

private
{
size_t calcLength(ubyte ubFuncCode)() if ( ubFuncCode == 1 )
{
return 10; // More complex calculated value
}

size_t calcLength(ubyte ubFuncCode)() if ( ubFuncCode == 2 )
{
return 20; // More complex calculated value
}

size_t calcLength(ubyte ubFuncCode)() if ( ubFuncCode == 3 )
{
return 30; // More complex calculated value
}
}

size_t doCalcLength(ubyte ubFuncCode)
{
return calcLength!(ubFuncCode)();
}

int main()
{
doCalcLength(1);
return 0;
}

But... how can I execute these functions? I mean, calling 
doCalcLength(1) function says "Variable ubFuncCode cannot be 
read at compile time". So my idea is to create an array during 
compile time using traits (e.g. __traits(allMembers)) and to 
check this later during runtime. For illustration purposes 
something like this:


--> During compile time:

void function()[ubyte] calcLengthArray;

auto tr = __traits(allMembers, example);
foreach ( string s; tr )
{
calcLengthArray[__trait(get, s)] = s;
}

--> During runtime:

size_t doCalcLength(ubyte ubFuncCode)
{
auto length = 0;

if ( ubFuncCode in calcLengthArray )
{
length = calcLengthArray[ubFuncCode]!(ubFuncCode)();
}

return length;
}

I hope everyone knows what I want to do :). But... does anyone 
know how I can realize that? I don't want to use a switch/case 
structure because the calcLength() functions can be very 
complex and I've over 40 different function codes. So, I think 
the best approach is to use something similar to the one I 
described.


Let us look at your function:

size_t calcLength(ubyte ubFuncCode)() if ( ubFuncCode == 1 )
{
return 10; // More complex calculated value
}

This function accepts only one template parameter and no other 
parameters. A template parameter has to be known at compile time. 
You can't pass a value read from a socket, because you can read 
from a socket only at runtime. That is what the error message says.

You call such a function as follows:

calcLength!1()
calcLength!2()
and so on.

Your doCalcLength won't work for the same reason: you try to pass 
"ubFuncCode", which is known only at runtime, as a template 
parameter, so you will get the same error.
You can instantiate all calcLength overloads and save them in 
calcLengthArray at some index, just like you already do, but 
without any template parameters (a sketch of this follows after 
the switch example below). The call would look something like:


   length = calcLengthArray[ubFuncCode]();

But it is simpler and shorter just to use a switch statement:

switch (ubFuncCode)
{
  case 1:
Do what calcLength!1() would do
break;
  case 2:
Do what calcLength!2() would do
break;
  default:
break;
}
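
If you still prefer the table, here is a rough sketch of the idea 
(calcLengthTable and the module constructor are made up; the three 
calcLength templates are the ones from your example):

size_t calcLength(ubyte ubFuncCode)() if (ubFuncCode == 1) { return 10; }
size_t calcLength(ubyte ubFuncCode)() if (ubFuncCode == 2) { return 20; }
size_t calcLength(ubyte ubFuncCode)() if (ubFuncCode == 3) { return 30; }

// Every instantiation is created at compile time; the lookup itself
// happens at runtime with the byte read from the socket.
size_t function()[ubyte] calcLengthTable;

static this()
{
    import std.meta : AliasSeq;

    foreach (code; AliasSeq!(1, 2, 3)) // unrolled at compile time
    {
        calcLengthTable[cast(ubyte) code] = &calcLength!code;
    }
}

size_t doCalcLength(ubyte ubFuncCode)
{
    if (auto p = ubFuncCode in calcLengthTable)
    {
        return (*p)();
    }
    return 0; // unknown function code
}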


Re: [OT] #define

2017-05-22 Thread Eugene Wissner via Digitalmars-d-learn

On Monday, 22 May 2017 at 13:11:15 UTC, Andrew Edwards wrote:
Sorry if this is a stupid question but it eludes me. In the 
following, what is THING? What is SOME_THING?


#ifndef THING
#define THING
#endif

#ifndef SOME_THING
#define SOME_THING THING *
#endif

Is this equivalent to:

alias thing = void;
alias someThing = thing*;

Thanks,
Andrew


No, it isn't. THING is empty, so SOME_THING is "*".

Empty macros are used, for example, to inline functions:

#ifndef MY_INLINE
#define MY_INLINE
#endif

MY_INLINE void function()
{
}

So you can choose at compile time whether you want to inline the 
function or not. D is more restrictive than C here; I don't know 
a way to port this to D.


Re: DMD, LDC, and GDC compilers and 32/64 bit

2017-06-18 Thread Eugene Wissner via Digitalmars-d-learn

On Sunday, 18 June 2017 at 16:08:36 UTC, Russel Winder wrote:
I believe DMD, LDC, and GDC all have the -m32 or -m64 option to 
determine the word size of compiled object and executable.


I also believe there are 32-bit and 64-bit builds of the three 
compilers. Or are there?


It appears at some time in the past that some of the compilers 
when compiled as 32-bit executables, could not generate 64-bit 
objects and executables as they did not understand the -m64 
option, it was not compiled in.


I am asking this as I cannot test to get experimental data, but 
I need to fix a long standing removal of a test in the SCons D 
test suite.


Is there a way to determine the bitsize of the compiler 
executable, in the test it is assumed that if the OS is 32-bit 
then so are the D compilers.


On linux "file" gives you such information about an executable.


Re: DMD, LDC, and GDC compilers and 32/64 bit

2017-06-18 Thread Eugene Wissner via Digitalmars-d-learn

On Sunday, 18 June 2017 at 17:57:28 UTC, Eugene Wissner wrote:

On Sunday, 18 June 2017 at 16:08:36 UTC, Russel Winder wrote:
I believe DMD, LDC, and GDC all have the -m32 or -m64 option 
to determine the word size of compiled object and executable.


I also believe there are 32-bit and 64-bit builds of the three 
compilers. Or are there?


It appears at some time in the past that some of the compilers 
when compiled as 32-bit executables, could not generate 64-bit 
objects and executables as they did not understand the -m64 
option, it was not compiled in.


I am asking this as I cannot test to get experimental data, 
but I need to fix a long standing removal of a test in the 
SCons D test suite.


Is there a way to determine the bitsize of the compiler 
executable, in the test it is assumed that if the OS is 32-bit 
then so are the D compilers.


On linux "file" gives you such information about an executable.


Sample output on Slackware (multilib distros like ubuntu/debian 
are different):


belka[19:55]:~$ file /usr/bin/gcc-5.3.0
/usr/bin/gcc-5.3.0: ELF 64-bit LSB executable, x86-64, version 1 
(SYSV), dynamically linked, interpreter 
/lib64/ld-linux-x86-64.so.2, stripped


Re: Question on Container Array.

2017-09-18 Thread Eugene Wissner via Digitalmars-d-learn

On Monday, 18 September 2017 at 11:47:07 UTC, Vino.B wrote:

Hi All,

  Can someone explain the questions below to me?

Q1: void main (Array!string args) : Why can't we use container 
array in void main?


Q2: What is the difference between the below?
insert, insertBack
stableInsert, stableInsertBack
linearInsert, stableLinearInsert, stableLinearInsert

Q3: Storing the data in a container array stores the data in 
memory which is managed by malloc/free, whereas operations such 
as appending data using any of the above or "~=" are managed by 
the GC. Is my understanding correct?



From,
Vino.B


Q1: I think that someone could explain it better, but basically a 
program gets its arguments as an array of C strings. So a C main 
looks like:


main(int argc, char **argv);

To make this a bit safer, D's main works with an array of strings 
instead of pointers. D's main function is called from druntime:
https://github.com/dlang/druntime/blob/95fd6e1e395e6320284a22f5d19fa41de8e1dcbb/src/rt/dmain2.d#L301
And it wouldn't be that great to make druntime depend on Phobos 
and its containers. But theoretically it would be possible to 
support 'void main(Array!string args)' with a custom dmd and 
druntime.


Q2: They are the same for Array, but theoretically they can be 
defined differently. "stable" in "stableInsert" just means that a 
range obtained from the container can still be used after 
changing the container. So if you get an Array range with 
Array[], you can still use this range after stableInsert. 
"insert" is just shorter than "insertBack".


Q3: "~=" uses GC only for built-in arrays. You can define your 
own "~=" for containers. "~=" for Array calls insertBack. So it 
will use malloc here.
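
A tiny example of the difference:

import std.container.array : Array;

void main()
{
    int[] builtin;
    builtin ~= 1;   // built-in array: the append goes through the GC

    Array!int arr;
    arr ~= 1;       // container Array: "~=" calls insertBack, which uses
                    // malloc/free internally
    assert(arr.length == 1 && arr[0] == 1);
}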


Re: scope(exit) and destructor prioity

2017-09-18 Thread Eugene Wissner via Digitalmars-d-learn

On Monday, 18 September 2017 at 20:55:21 UTC, Sasszem wrote:

On Monday, 18 September 2017 at 20:30:20 UTC, Jerry wrote:

On Monday, 18 September 2017 at 20:26:05 UTC, Sasszem wrote:

 [...]


It's called inbetween the destructors of wherever you put the 
scope(exit).


import std.stdio;

struct De
{
~this() { writeln("De"); }
}

void main()
{
De a;
scope(exit) writeln("scope exit");
De b;
}


Output:
De
scope exit
De


If I write "auto a = new De()", then it calls the scope first, 
no matter where I place it.


If I write "auto a = new De()" I have the same behaviour. If I 
have "auto b = new De()" aswell, then yes, destructors are called 
after scope exit.
Because you allocate on the heap with new, the destructor isn't 
called at the end of the scope at all. It is called later by the 
GC.


Try to put variable declarations with a destructor after the 
scope(exit), or destroy them manually with destroy(a).


See https://dlang.org/spec/statement.html#scope-guard-statement 
for order of calling destructors at the end of scope.
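
For the heap case, a manual destroy inside a scope guard works, 
for example:

import std.stdio;

struct De
{
    ~this() { writeln("De"); }
}

void main()
{
    auto a = new De();                // heap allocation: no automatic dtor call
    scope(exit) writeln("scope exit");
    scope(exit) destroy(*a);          // runs the destructor deterministically
}

scope(exit) statements run in reverse order, so this prints "De" 
first and then "scope exit".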


Re: What the hell is wrong with D?

2017-09-19 Thread Eugene Wissner via Digitalmars-d-learn
On Tuesday, 19 September 2017 at 17:40:20 UTC, EntangledQuanta 
wrote:


writeln(x + ((_win[0] == '@') ? w/2 : 0));
writeln(x + (_win[0] == '@') ? w/2 : 0);

The first returns x + w/2 and the second returns w/2!

WTF!!! This stupid bug has caused me considerable waste of 
time. Thanks Walter! I know you care so much about my time!


I assume someone is going to tell me that the compiler treats 
it as


writeln((x + (_win[0] == '@')) ? w/2 : 0);

Yeah, that is really logical! No wonder D sucks and has so many 
bugs! Always wants me to be explicit about the stuff it won't 
figure out but it implicitly does stuff that makes no sense. 
The whole point of the parenthesis is to inform the compiler 
about the expression to use. Not use everything to the left of 
?.


Thanks for wasting some of my life... Just curious about who 
will justify the behavior and what excuses they will give.


Why do you claim that a bug in your code is a compiler bug? Check 
"Operator precedence" [1]. There is really no reason why the 
current precedence is less "logical" than what you are expecting.


And try to think about the things you're writing; nobody forces 
you to use D.


[1] https://wiki.dlang.org/Operator_precedence


Re: Terminating a thread (which is blocking)

2017-08-25 Thread Eugene Wissner via Digitalmars-d-learn

On Thursday, 24 August 2017 at 07:23:15 UTC, Timothy Foster wrote:
I've started a thread at the beginning of my program that waits 
for user input:


`thread = new Thread(&checkInput).start;`

`static void checkInput(){
foreach (line; stdin.byLineCopy) { ... }
}`

I need to stop checking for user input at some point in my 
program but I'm not sure how to kill this thread. 
`thread.yield();` called from my main thread doesn't kill it 
and I'm not sure how to send a message to the input checking 
thread to get it to terminate itself when `stdin.byLineCopy` 
just sits there and blocks until user input is received.


If you're on Linux, you can try pthread_kill.
Otherwise, don't block. Define a shared boolean variable that 
says whether the thread should stop. Wait for input for some 
limited time, check the condition variable, then try to read 
again or break, and so on.
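
A minimal POSIX-only sketch of that non-blocking variant (the 
names are made up; checkInput is your function adapted to poll 
stdin with select and a timeout):

import core.atomic : atomicLoad;
import core.sys.posix.sys.select;
import core.sys.posix.sys.time : timeval;
import std.stdio : readln;

shared bool stopRequested = false;

void checkInput()
{
    while (!atomicLoad(stopRequested))
    {
        fd_set readSet;
        FD_ZERO(&readSet);
        FD_SET(0, &readSet);            // file descriptor 0 is stdin
        auto timeout = timeval(1, 0);   // wake up at least once per second

        if (select(1, &readSet, null, null, &timeout) > 0)
        {
            auto line = readln();       // input is available, read it
            // ... handle the line ...
        }
    }
}

The main thread then sets stopRequested (e.g. with atomicStore) 
instead of killing the thread.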


Re: Web servers in D

2017-08-25 Thread Eugene Wissner via Digitalmars-d-learn

On Friday, 25 August 2017 at 05:25:09 UTC, Hasen Judy wrote:
What libraries are people using to run webservers other than 
vibe.d?


Don't get me wrong I like the async-io aspect of vibe.d but I 
don't like the weird template language and the fact that it 
caters to mongo crowd.


I think for D to have a good web story it needs to appeal to 
serious backend developers, not hipsters who go after fads 
(mongodb is a fad, jade/haml is a fad).


I probably need to combine several libraries, but the features 
I'm looking for are:


- Spawn an HTTP server listening on a port, and routing 
requests to functions/delegates, without hiding the details of 
the http request/response objects (headers, cookies, etc).


- Support for websockets

- Runs delegates in fibers/coroutines

- Basic database connectivity (No "orm" needed; just raw sql).

- When iterating the result set of a sql query, has the ability 
to automatically map each row against a struct, and throw if 
the structure does not match.


- More generally, map any arbitrary object (such as json) to a 
struct. Something like Zewo/Reflection package for swift[0].


[0]: https://github.com/Zewo/Reflection

I feel like Vibe.d satisfies my first 3 requirements, but for 
the rest I will probably have to look for something else.


There is collie [1]. I've never used it, so I can't say a lot about it.

arsd [2] has a lot of interesting web stuff: an event loop, 
FastCGI/SimpleCGI, and web, DOM and mail utilities.


And last but not least, I'm currently running a small web server 
serving static files based on tanya [3]. Once I'm ready to write 
a web framework on top of it, it would be what you mention: no 
compile-time templates, no jade-style templates, since I dislike 
these too. But unfortunately it is not something that can be used 
now.


[1] https://github.com/huntlabs/collie
[2] https://github.com/adamdruppe/arsd
[3] https://github.com/caraus-ecms/tanya


Re: What does ! mean?

2017-09-27 Thread Eugene Wissner via Digitalmars-d-learn
On Wednesday, 27 September 2017 at 14:23:01 UTC, Ky-Anh Huynh 
wrote:

Hi,

I am from Ruby world where I can have `!` (or `?`) in method 
names: `!` indicates that a method would modify its object 
(`foo.upcase!` means `foo = foo.upcase`). ( I don't know if 
there is any official Ruby documentation on this convention 
though. )


In D I see `!` quite a lot. I have read the first 50 chapters 
of Ali's book, but nowhere do I see a note on `!`. It's a 
compile-time thing, isn't it? E.g.,


```
foo = formattedRead!"%s"(value);
```

But I also see `!` for some map/filter invocations. It's quite 
confusing me.


Can you please explain and give any link where I can learn more 
about these things?


Thanks a lot.


See also the following chapter in Ali's book:
http://ddili.org/ders/d.en/templates.html
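
For instance, both of these use `!` to pass compile-time arguments:

import std.algorithm : map;
import std.conv : to;

void main()
{
    // "!" passes the compile-time (template) arguments; the runtime
    // arguments follow in the usual parentheses.
    auto n = to!int("42");                     // "to" instantiated with the type int
    auto doubled = [1, 2, 3].map!(x => x * 2); // "map" instantiated with a lambda
    assert(n == 42);
}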


Re: git workflow for D

2017-12-04 Thread Eugene Wissner via Digitalmars-d-learn

On Monday, 4 December 2017 at 20:14:15 UTC, Ali Çehreli wrote:
6) 'git push -force' so that your GitHub repo is up-to-date 
right? (There, I mentioned "force". :) )




The right option name is --force-with-lease :)


Re: Inline assembly question

2017-11-12 Thread Eugene Wissner via Digitalmars-d-learn
On Sunday, 12 November 2017 at 11:01:39 UTC, Dibyendu Majumdar 
wrote:

Hi,

I have recently started work on building a VM for Lua (actually 
a derivative of Lua) in X86-64 assembly. I am using the dynasm 
tool that is part of LuaJIT. I was wondering whether I could 
also write this in D's inline assembly perhaps, but there is 
one aspect that I am not sure how to do.


The assembly code uses static allocation of registers, but 
because of the differences in how registers are used in Win64 
versus Unix X64 - different registers are assigned depending on 
the architecture. dynasm makes this easy to do using macros; 
e.g. below.


|.if X64WIN
|.define CARG1, rcx // x64/WIN64 C call arguments.
|.define CARG2, rdx
|.define CARG3, r8
|.define CARG4, r9
|.else
|.define CARG1, rdi // x64/POSIX C call arguments.
|.define CARG2, rsi
|.define CARG3, rdx
|.define CARG4, rcx
|.endif

With above in place, the code can use the mnemonics to refer to 
the registers rather than the registers themselves. This allows 
the assembly code to be coded once for both architectures.


How would one do this in D inline assembly?

Thanks and Regards
Dibyendu


Here is an example with mixins:

version (Windows)
{
enum Reg : string
{
CARG1 = "RCX",
CARG2 = "RDX",
}
}
else
{
enum Reg : string
{
CARG1 = "RDI",
CARG2 = "RSI",
}
}

template Instruction(string I, Reg target, Reg source)
{
enum string Instruction = "asm { " ~ I ~ " " ~ target ~ ", " ~ source ~ "; }";

}

void func()
{
mixin(Instruction!("mov", Reg.CARG1, Reg.CARG2));
}


Re: @nogc deduction for templated functions

2017-11-18 Thread Eugene Wissner via Digitalmars-d-learn

On Saturday, 18 November 2017 at 17:28:14 UTC, David  Zhang wrote:

Hi,

Is there a way for a templated function to deduce or apply the 
@safe/@nogc attributes automatically? I feel like I remember dmd 
doing so at one point, but it doesn't appear to work anymore. 
In particular, I need to call a function belonging to a 
templated type, but I do not know which attributes are applied.


eg.

void func(T)(T t)
{
//Don't know if safe or nogc
t.someFunc();
}

Thanks.


If you instantiate  "func" the compiler should correctly infer 
the attributes. Do you have any code where it doesn't work?
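
For example, a small self-contained sketch (S and someFunc are made up 
here): the attributes of the instantiation func!S are inferred from what 
the template body actually does, so the annotated unittest compiles.

void func(T)(T t)
{
    // No attributes written here; they are inferred per instantiation.
    t.someFunc();
}

struct S
{
    void someFunc() @safe @nogc nothrow pure
    {
    }
}

@safe @nogc nothrow pure unittest
{
    func(S()); // func!S is inferred @safe @nogc nothrow pure
}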


Re: Inline assembly question

2017-11-12 Thread Eugene Wissner via Digitalmars-d-learn
On Sunday, 12 November 2017 at 15:25:43 UTC, Dibyendu Majumdar 
wrote:

On Sunday, 12 November 2017 at 12:32:09 UTC, Basile B. wrote:
On Sunday, 12 November 2017 at 12:17:51 UTC, Dibyendu Majumdar 
wrote:
On Sunday, 12 November 2017 at 11:55:23 UTC, Eugene Wissner 
wrote:

[...]


Thank you - I probably could use something like this. It is 
uglier than the simpler approach in dynasm of course.


How about when I need to combine this with some struct/union 
access? In dynasm I can write:


  |  mov BASE, CI->u.l.base   // BASE = ci->u.l.base (volatile)
  |  mov PC, CI->u.l.savedpc  // PC = CI->u.l.savedpc


How can I mix the mixin above and combine with struct offsets?



https://dlang.org/spec/iasm.html#agregate_member_offsets

aggregate.member.offsetof[someregister]


Sorry I didn't phrase my question accurately. Presumably to use 
above with the mnemonics I would need additional mixin 
templates where the aggregate type and member etc would need to 
be parameters?


You can use just string parameters instead of enums, then you can 
pass arbitrary arguments to the instructions. The compiler will 
tell you if something is wrong with the syntax of the generated 
assembly.
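
For example, a rough sketch that combines plain string parameters with 
the offsetof syntax from the spec. CallInfo and its members are made-up 
placeholders here, not the real Lua structures:

struct CallInfo
{
    void* base;
    void* savedpc;
}

// Everything is a plain string, so any register, aggregate or member can
// be passed in; errors in the generated asm show up at mixin time.
template Load(string target, string aggregate, string member, string pointer)
{
    enum string Load = "asm { mov " ~ target ~ ", "
        ~ aggregate ~ "." ~ member ~ ".offsetof[" ~ pointer ~ "]; }";
}

void loadRegisters(CallInfo* ci)
{
    asm { mov RDX, ci; }
    mixin(Load!("RAX", "CallInfo", "base", "RDX"));
    mixin(Load!("RCX", "CallInfo", "savedpc", "RDX"));
}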


Re: Class allocators

2017-11-11 Thread Eugene Wissner via Digitalmars-d-learn

On Saturday, 11 November 2017 at 14:26:34 UTC, Nordlöw wrote:

Have anybody used allocators to construct class instances?


Do you mean Phobos allocators or allocators as a concept?
What is the problem?


Re: Strange AV in asm mode (code only for amd64)

2017-11-05 Thread Eugene Wissner via Digitalmars-d-learn

On Sunday, 5 November 2017 at 13:43:15 UTC, user1234 wrote:

Hello, try this:

---
import std.stdio;

alias Proc = size_t function();

size_t allInnOne()
{
asm pure nothrow
{
mov RAX, 1;
ret;
nop;nop;nop;nop;nop;nop;nop;
mov RAX, 2;
ret;
}
}

void main()
{
Proc proc1 = &allInnOne;
Proc proc2 = cast(Proc) (cast(void*)proc1 + 16);
writeln(proc1(), " ", proc2());
}
---

The call to proc1() gens a SEGFAULT after the first RET.
Remove the call to proc1() and it works.

Why that ?


One of the problems is that "naked" is missing in your assembly. 
If you write


asm pure nothrow
{
 naked;
 mov RAX, 1;
 ret;
 nop;nop;nop;nop;nop;nop;nop;
 mov RAX, 2;
 ret;
}

writeln(proc1()) works. Without "naked" dmd generates a prologue and 
an epilogue for your function. Inside the assembly you return from the 
function without restoring the stack, and that causes the segfault. So 
you either have to undo the prologue (restore the stack) before 
returning or use naked assembly.
With "naked" and "Proc proc2 = cast(Proc) (cast(void*)proc1 + 
8);" the example works.


Re: Any book recommendation for writing a compiler?

2017-11-04 Thread Eugene Wissner via Digitalmars-d-learn

On Wednesday, 1 November 2017 at 20:53:44 UTC, Dr. Assembly wrote:
Hey guys, if I were to get into dmd's source code to play a 
little bit (just for fun, no commercial use at all), which 
books/resources do you recommend to start out?


A few more resources on writing a frontend (lexer, syntactic and 
semantic analyzer).


http://thinkingeek.com/gcc-tiny/
It tells how to create a GCC frontend for a Pascal-like language, 
tiny. It can be useful since you can look at how the same approach 
applies to a real D frontend in GDC.


https://ruslanspivak.com/lsbasi-part1/
Very clear tutorial on writing a Pascal interpreter in Python. 
Very beginner friendly, but not complete yet.


http://buildyourownlisp.com/contents
It is an online book that teaches C by writing an interpreter for 
a Lisp-like language, lispy. The code can be easily translated to D.


If you want, you can also look at some Haskell books. A simple 
parser is one of the standard projects used to teach Haskell.


Re: return ref this -dip1000

2017-12-11 Thread Eugene Wissner via Digitalmars-d-learn

On Monday, 11 December 2017 at 20:40:09 UTC, vit wrote:

This code doesn't compile with -dip1000:

struct Foo{
int foo;


ref int bar(){

return foo;
}
}

Error: returning `this.foo` escapes a reference to parameter 
`this`, perhaps annotate with `return`



How can be annotated this parameter with 'return ref' ?


struct Foo
{
    int foo;

    ref int bar() return
    {
        return foo;
    }
}



Re: there's no gdc for windows?

2018-05-15 Thread Eugene Wissner via Digitalmars-d-learn

On Tuesday, 15 May 2018 at 14:25:31 UTC, Dr.No wrote:
Has gdc been supported for Windows? if so, where can I find it? 
I've only find Linux versions so far...


Just like GCC, you need MinGW or Cygwin to run GDC on Windows. 
Unfortunately GDC doesn't provide pre-built binaries currently, but 
according to the GDC Bugzilla there are people who have successfully 
built GDC under MinGW. Have you tried to build GDC with MinGW or 
Cygwin?


Re: Calling convention for ASM on Linux AMD64

2018-08-18 Thread Eugene Wissner via Digitalmars-d-learn

On Saturday, 18 August 2018 at 04:16:11 UTC, Sean O'Connor wrote:
What calling convention is used for assembly language in Linux 
AMD64?
Normally the parameters go in fixed order into designated 
registers.


import std.stdio;
// Linux AMD64
float* test(float *x,ulong y){
asm{
naked;
align 16;
mov RAX,RDI;
ret;
}
}

void main(){
float[] f=new float[16];
writeln(&f[0]);
float* a=test(&f[0],7);
writeln(a);
}

If the ulong y parameter is removed from the function 
definition the pointer x goes into RDI as expected.  When y is 
added it all goes wrong. According to AMD64 the pointer should 
stay in RDI and the ulong go into RSI.


If you compile with DMD, DMD passes the arguments in reverse 
order. LDC and GDC use normal C calling conventions.


Re: Calling convention for ASM on Linux AMD64

2018-08-18 Thread Eugene Wissner via Digitalmars-d-learn

On Saturday, 18 August 2018 at 06:47:36 UTC, Eugene Wissner wrote:
On Saturday, 18 August 2018 at 04:16:11 UTC, Sean O'Connor 
wrote:
What calling convention is used for assembly language in Linux 
AMD64?
Normally the parameters go in fixed order into designated 
registers.


import std.stdio;
// Linux AMD64
float* test(float *x,ulong y){
asm{
naked;
align 16;
mov RAX,RDI;
ret;
}
}

void main(){
float[] f=new float[16];
writeln(&f[0]);
float* a=test(&f[0],7);
writeln(a);
}

If the ulong y parameter is removed from the function 
definition the pointer x goes into RDI as expected.  When y is 
added it all goes wrong. According to AMD64 the pointer should 
stay in RDI and the ulong go into RSI.


If you compile with DMD, DMD passes the arguments in reverse 
order. LDC and GDC use normal C calling conventions.


You can define test() as extern(C) to force dmd to use the 
expected argument order.
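
A minimal sketch of that variant (the same toy function as above, only 
the linkage changes):

// With extern(C), dmd follows the System V AMD64 convention on Linux:
// x arrives in RDI, y in RSI, and the pointer result is returned in RAX.
extern(C) float* test(float* x, ulong y)
{
    asm
    {
        naked;
        mov RAX, RDI;
        ret;
    }
}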


Re: Allocator Part of Type

2018-03-16 Thread Eugene Wissner via Digitalmars-d-learn

On Thursday, 15 March 2018 at 19:36:10 UTC, jmh530 wrote:
I recall some talk Andrei did where he said it was a bad idea 
to make the allocator part of the type.  However, the container 
library in dlang-community(says it is backed with 
std.experimental.allocator) contains allocator as part of the 
type. Automem does too. Honestly, I would think you would 
really be hobbled if you didn't. For instance, if you want to 
expand a DynamicArray using the built-in ~= operator, then you 
really need to know what the allocator is. Without ~= you'd 
have to rely on functions (maybe member functions, maybe not).


So I suppose I'm wondering why is it a bad thing to include the 
allocator as part of the type and why is it that it seems like 
in practice that's how it is done anyway.


I think it is done in D, because it was always done in C++ this 
way, so it is known to be a working solution.


I see two reasons to make the allocator part of the type:
1. Virtual function calls are slow, so let us make all allocators 
structs; then we can pass them as part of the type without using stuff 
like IAllocator, allocatorObject etc.
2. The used allocator is actually known at compile time, so which 
allocator to use can be decided at compile time.


Now there are the Bloomberg (BDE) allocators for C++ [1], which are 
polymorphic allocators: the allocators just implement an interface. As 
far as I remember, the main reasoning behind them was to reduce compile 
time (and Bloomberg has tons of C++ code), so they developed these 
allocators and containers that accept the allocator as a constructor 
argument and save it as a container member.


The main problem with allocators as part of the type isn't that you 
can't compare or assign types with different allocators - with a bit of 
metaprogramming that is easy. The problem for me is that all your code 
has to be templated then. You can't have a function like

void myFunc(Array!int)

because it would work only with one allocator if the allocator is part 
of the type.
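
A tiny sketch of the problem (Array, Mallocator and GCAllocator here are 
just placeholders, not the real library types):

struct Mallocator {}
struct GCAllocator {}

// The allocator is part of the type.
struct Array(T, Allocator)
{
    T[] data;
}

// Every consumer has to be a template, otherwise it is tied to exactly
// one allocator.
void process(Allocator)(Array!(int, Allocator) array)
{
}

void main()
{
    process(Array!(int, Mallocator)());
    process(Array!(int, GCAllocator)());
}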


So I have been using polymorphic allocators for a long time [2].

I'm surprised to see that the std.experimental.allocator documentation 
now includes an example of how to save "the IAllocator Reference For 
Later Use".


[1] https://github.com/bloomberg/bde/wiki/BDE-Allocator-model
[2] 
https://github.com/caraus-ecms/tanya/blob/80a177179d271b6d023f51aa3abb69376415b36e/source/tanya/container/array.d#L205


Re: dub getting stuck

2019-03-17 Thread Eugene Wissner via Digitalmars-d-learn

On Sunday, 17 March 2019 at 07:20:47 UTC, Joel wrote:

macOS 10.13.6
dmd 2.085.0
dub 1.3.0

{
"name": "server",
"targetType": "executable",
"description": "A testing D application.",
"sourcePaths" : ["source"],
"dependencies":
{
"vibe-d" : "~>0.8.0"
}
}

void main()
{
import vibe.d;
listenHTTP(":8080", (req, res) {
res.writeBody("Hello, World: " ~ req.path);
});
runApplication();
}


dub -v
..
Sub package vibe-d:diet doesn't exist in vibe-d 0.8.1-alpha.1.


(gets as far as that line?!)

On another program, it gets stuck in a completely different 
situation.


dub 1.3.0 is quite old. Is it reproducible with a newer 
version?


Otherwise these issues might be related:

https://github.com/dlang/dub/issues/1345
https://github.com/dlang/dub/issues/1001


Re: Singleton in Action?

2019-02-02 Thread Eugene Wissner via Digitalmars-d-learn

On Saturday, 2 February 2019 at 16:56:45 UTC, Ron Tarrant wrote:

Hi guys,

I ran into another snag this morning while trying to implement 
a singleton. I found all kinds of examples of singleton 
definitions, but nothing about how to put them into practice.


Can someone show me a code example for how one would actually 
use a singleton pattern in D? When I did the same thing in PHP, 
it took me forever to wrap my brain around it, so I'm hoping to 
get there a little faster this time.


Here's the singleton code I've been playing with:

class DSingleton
{
private:
// Cache instantiation flag in thread-local bool
// Thread local
static bool instantiated_;

// Thread global
__gshared DSingleton instance_;

this()
{

} // this()

public:

static DSingleton get()
{
if(!instantiated_)
{
synchronized(DSingleton.classinfo)
{
if(!instance_)
{
instance_ = new DSingleton();
writeln("creating");
}

instantiated_ = true;
}
}
else
{
writeln("not created");
}

return(instance_);

} // DSingleton()

} // class DSingleton

So, my big question is, do I instantiate like this:

DSingleton singleton = new DSingleton;

Or like this:

DSingleton singleton = singleton.get();

And subsequent calls would be...? The same? Using get() only?


Imho it looks fine.

For creation, get() should always be used, since it is the most 
convenient way to ensure that there is really only one instance 
of the singleton. Just make this() private, so that only the class 
itself can create new instances:


private this()
{
}

And you probably don't need instantiated_; you can always check 
whether instance_ is null or not. So:


if (instance_ is null)
{
...
}
else
{
   ...
}
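
Usage would then look like this (a minimal sketch, assuming the 
DSingleton class above with the private constructor):

void main()
{
    // Never "new DSingleton" directly; always go through get().
    DSingleton a = DSingleton.get(); // creates the instance on first use
    DSingleton b = DSingleton.get(); // returns the same instance
    assert(a is b);
}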


Re: Return Value Optimization: specification, requirements?

2019-02-02 Thread Eugene Wissner via Digitalmars-d-learn

On Saturday, 2 February 2019 at 09:58:25 UTC, XavierAP wrote:
I've heard here and there that D guarantees RVO, or is even 
specified to do so...


Is it spelled out in the language specification or elsewhere? I 
haven't found it.


The D spec is often not the right place to look for the 
specification of the D language.
But yes, D guarantees RVO. The DMD frontend has RVO tests, and 
functions like std.algorithm.mutation.move rely on RVO and 
wouldn't work (or even be possible) without it.




Do you know the exact requirements for RVO or NRVO to be 
possible in theory, and to be guaranteed in practice in D? Does 
it depend only on what is returned, or does it depend how it's 
constructed?




It is just plain RVO; I'm not aware of any differences for 
different types or kinds of construction.
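
A small sketch of what that guarantee buys you (S is made up here): S 
cannot be copied at all, yet returning it by value compiles, because 
the result is constructed directly in the caller's storage.

struct S
{
    int[4] payload;
    @disable this(this); // copying is not allowed
}

S make()
{
    S s;
    s.payload[0] = 42;
    return s; // fine: NRVO, no copy is made
}

void main()
{
    auto s = make();
    assert(s.payload[0] == 42);
}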


I know I can debug to find out case by case, but that's the 
kind of C++ stuff I want to avoid... I want to know the 
theory/norm/spec.


Thanks





return scope ref outlives the scope of the argument

2019-06-25 Thread Eugene Wissner via Digitalmars-d-learn

struct Container
{
}

static struct Inserter
{
private Container* container;

private this(return scope ref Container container) @trusted
{
this.container = &container;
}

}

auto func()()
{
Container container;
return Inserter(container);
}

void main()
{
static assert(!is(typeof(func!()())));
}

The code above compiles with dmd 2.085, but not 2.086 (with 
-preview=dip1000). What am I doing wrong?


Re: return scope ref outlives the scope of the argument

2019-06-25 Thread Eugene Wissner via Digitalmars-d-learn

On Tuesday, 25 June 2019 at 11:16:47 UTC, Jonathan M Davis wrote:
On Tuesday, June 25, 2019 1:32:58 AM MDT Eugene Wissner via 
Digitalmars-d-learn wrote:

struct Container
{
}

static struct Inserter
{
 private Container* container;

 private this(return scope ref Container container) 
@trusted

 {
this.container = &container;
 }

}

auto func()()
{
 Container container;
 return Inserter(container);
}

void main()
{
static assert(!is(typeof(func!()())));
}

The code above compiles with dmd 2.085, but not 2.086 (with
-preview=dip1000). What am I doing wrong?


You're storing a pointer to a scope variable. That's violating 
the entire point of scope. If something is scope, you can't 
store any kind of reference to it. And since container is a 
local variable in func, and Inserter tries to return from func 
with a pointer to container, you definitely have an @safety 
problem, because that pointer would be invalid once func 
returned.


- Jonathan M Davis


So you're saying that func() shouldn't compile? That is exactly what 
the assertion in the main function does: it asserts that the function 
cannot be instantiated. And that was true for 2.085, but the function 
can be instantiated with 2.086.


Re: return scope ref outlives the scope of the argument

2019-06-25 Thread Eugene Wissner via Digitalmars-d-learn

On Tuesday, 25 June 2019 at 12:04:27 UTC, Jonathan M Davis wrote:
On Tuesday, June 25, 2019 1:32:58 AM MDT Eugene Wissner via 
Digitalmars-d-learn wrote:

struct Container
{
}

static struct Inserter
{
 private Container* container;

 private this(return scope ref Container container) 
@trusted

 {
this.container = &container;
 }

}

auto func()()
{
 Container container;
 return Inserter(container);
}

void main()
{
static assert(!is(typeof(func!()())));
}

The code above compiles with dmd 2.085, but not 2.086 (with
-preview=dip1000). What am I doing wrong?


Okay. I clearly looked over what you posted too quickly and 
assumed that the subject was the error that you were actually 
getting. The @trusted there is what's making the static 
assertion fail.


Inserter is able to compile with -dip1000 (or 
-preview=dip1000), because you marked it as @trusted, which 
throws away the scope checks. If you mark it @safe, it won't 
compile. Without -dip1000, I wouldn't have expected anything to 
be caught, but trying it on run.dlang.io, it looks like the 
return probably makes it fail, which I find surprising, since I 
didn't think that return had any effect without -dip25, but I 
haven't done much with return on parameters.


You'd have an easier time figuring out what's going on if you'd 
just not make func a template rather than use the static 
assertion, because then you'd see the compiler errors.


In any case, by using @trusted, you're getting around the scope 
compiler checks, which is why Inserter is able to compile with 
-dip1000. Without -dip1000, I'm not experienced enough with 
return parameters to know what the compiler will or won't 
catch, but the code is an @safety problem regardless. It does 
look like the behavior changed with 2.086 even without -dip1000 
being used, which probably has something to do with how the 
compiler was changed for DIP 1000, though it probably wasn't on 
purpose, since in theory, the behavior shouldn't have changed 
without -dip1000, but I don't know.


- Jonathan M Davis


Yes, reduced code could be a bit better.

@trusted doesn't throw the scope checks away (and that wouldn't make 
any sense, since I don't see another way to make the code above 
safe). Try:


struct Container
{
}

private Container* stuff(return scope ref Container container) @trusted
{
    return &container;
}

auto func()
{
Container container;
return stuff(container);
}

It fails with -dip1000 and works without (as expected).

"return scope ref" parameter in the constructor means, that the 
constructed object has the same scope as the scope of the 
argument.


I just want to know whether the behaviour of 2.085 or 2.086 is 
correct and if it is an "improvement" in 2.086, what I'm doing 
wrong.


Re: Casting to interface not allowed in @safe code?

2019-06-25 Thread Eugene Wissner via Digitalmars-d-learn

On Tuesday, 25 June 2019 at 16:51:46 UTC, Nathan S. wrote:

On Sunday, 23 June 2019 at 21:24:14 UTC, Nathan S. wrote:

https://issues.dlang.org/show_bug.cgi?id=2.


The fix for this has been accepted and is set for inclusion in 
DMD 2.080.


088 :)


Re: return scope ref outlives the scope of the argument

2019-06-25 Thread Eugene Wissner via Digitalmars-d-learn

On Tuesday, 25 June 2019 at 07:32:58 UTC, Eugene Wissner wrote:

struct Container
{
}

static struct Inserter
{
private Container* container;

private this(return scope ref Container container) @trusted
{
this.container = &container;
}

}

auto func()()
{
Container container;
return Inserter(container);
}

void main()
{
static assert(!is(typeof(func!()())));
}

The code above compiles with dmd 2.085, but not 2.086 (with 
-preview=dip1000). What am I doing wrong?


Whatever. https://issues.dlang.org/show_bug.cgi?id=20006


Re: Using D's precise GC when running an app with DUB

2019-05-23 Thread Eugene Wissner via Digitalmars-d-learn

On Thursday, 23 May 2019 at 14:50:12 UTC, Per Nordlöw wrote:

How do I specify a druntime flag such as

--DRT-gcopt=gc:precise

when running with dub as

dub run --compiler=dmd --build=unittest

?

The precise GC flag was introduced in version 2.085.0

See:
- https://dlang.org/changelog/2.085.0.html#gc_precise
- https://dlang.org/spec/garbage.html#precise_gc


You can put the following into the source:

extern(C) __gshared string[] rt_options = [
"gcopt=gc:precise"
];

You can wrap it in a version () block and set the version identifier 
in the dub configuration.
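
For example (a sketch; PreciseGC is just a made-up version identifier):

version (PreciseGC)
{
    extern(C) __gshared string[] rt_options = [
        "gcopt=gc:precise"
    ];
}

and in dub.json add "versions": ["PreciseGC"] to the configuration or 
build type that should run with the precise GC.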


Re: Run code before dub dependency's `shared static this()`

2019-05-05 Thread Eugene Wissner via Digitalmars-d-learn

On Sunday, 5 May 2019 at 08:24:29 UTC, Vladimirs Nordholm wrote:

Hello.

I have dub dependency which has a `shared static this()`.

In my project, can I run code code before the dependency's 
`shared static this()`?



"Static constructors within a module are executed in the lexical 
order in which they appear. All the static constructors for 
modules that are directly or indirectly imported are executed 
before the static constructors for the importer."


Source: https://dlang.org/spec/class.html#static-constructor


Re: Why are immutable array literals heap allocated?

2019-07-04 Thread Eugene Wissner via Digitalmars-d-learn

On Thursday, 4 July 2019 at 10:56:50 UTC, Nick Treleaven wrote:

immutable(int[]) f() @nogc {
return [1,2];
}

onlineapp.d(2): Error: array literal in `@nogc` function 
`onlineapp.f` may cause a GC allocation


This makes dynamic array literals unusable with @nogc, and adds 
to GC pressure for no reason. What code would break if dmd used 
only static data for [1,2]?


immutable(int[]) f() @nogc {
static immutable arr = [1, 2];
return arr;
}

You have to spell it out that the data is static.


Re: strangely silent compiler

2019-08-21 Thread Eugene Wissner via Digitalmars-d-learn

On Wednesday, 21 August 2019 at 13:41:20 UTC, Orfeo wrote:

I've:
```
module anomalo.util;

// Foo doesn't exist  anywhere!

Foo toJsJson(string type, Args...)(string id, Args args) {
   static if (type == "int" || type == "outcome") {
  return Json(["id" : Json(id), "type" : Json(type), 
"value" : Json(0),]);

   } else {
  static assert(0, "invalid type");
   }
}

```

So:
```
$ dub build
```
No error!

```
$ /usr/bin/dmd -lib -ofliba.a -debug -g -w -I. 
src/anomalo/util.d -vcolumns

```

No error!

Here [github](https://github.com/o3o/anomalo) my project.

Thank you


toJsJson is a template. Templates are only analysed when they are 
instantiated. The compiler doesn't give an error because it never 
compiles the body of toJsJson: you don't instantiate it anywhere.
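
A self-contained sketch of the same effect (Bar intentionally doesn't 
exist, just like Foo in your module):

// Compiles fine as long as nobody instantiates it: the body and the
// return type are only analysed at instantiation time.
Bar broken(T)(T value)
{
    return Bar(value);
}

void main()
{
    // Uncommenting the next line makes the compiler analyse broken!int
    // and report that Bar is undefined:
    // broken(42);
}

So adding a unittest (or any other code) that instantiates toJsJson is 
enough to surface the missing Foo.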


Re: What kind of Editor, IDE you are using and which one do you like for D language?

2019-12-23 Thread Eugene Wissner via Digitalmars-d-learn

On Monday, 23 December 2019 at 15:51:17 UTC, bachmeier wrote:

On Monday, 23 December 2019 at 15:07:32 UTC, H. S. Teoh wrote:
On Sun, Dec 22, 2019 at 05:20:51PM +, BoQsc via 
Digitalmars-d-learn wrote:
There are lots of editors/IDE's that support D language: 
https://wiki.dlang.org/Editors


What kind of editor/IDE are you using and which one do you 
like the most?


Linux is my IDE. ;-)  And I use vim for editing code.


T


Not a Vim user, but wondering if there's Neovim support for D. 
If so, it needs to be added to that wiki table.


Yes, most plugins that support Vim 8 support Neovim as well, and 
vice versa. I'm just using ALE; it has built-in D support and 
just uses the compiler/dub. I still haven't had time to test 
DCD or a language server with something like coc.


Re: Compiler module import graph

2021-03-13 Thread Eugene Wissner via Digitalmars-d-learn

On Saturday, 13 March 2021 at 14:20:01 UTC, frame wrote:

Is there a tool to view module import graph?

The compiler verbose output shows that the module is 
imported/parsed but not why. I want to avoid unnecessary 
imports to speed up compile times or avoid issues if the module 
contents are not fully compatible with the current compile 
target being in development.


An external tool: https://github.com/funkwerk/depend.