Re: Line numbers in backtraces (2017)

2017-11-02 Thread Moritz Maxeiner via Digitalmars-d-learn
On Thursday, 2 November 2017 at 19:05:46 UTC, Tobias Pankrath 
wrote:
Including Phobos? Your posted backtrace looks to me like 
templates instantiated within Phobos, so I think you'd need 
Phobos with debug symbols for those lines.


---
int main(string[] argv)
{
  return argv[1].length > 0;
}
---


~ [i] % rdmd -g -debug test.d
core.exception.RangeError@test.d(3): Range violation



No difference when I compile with 'dmd -g -debug' and run it manually.


That Error is thrown from within druntime. If you want to see line numbers for backtrace locations within druntime, you need to compile druntime with debug symbols.

Also `-debug` only changes conditional compilation behaviour[1].

[1] https://dlang.org/spec/version.html#DebugCondition
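
As an illustration (file contents made up), `-debug` only controls whether debug blocks like the one below are compiled in; `-g` is what adds the symbolic debug information:

---
import std.stdio : writeln;

void main()
{
    // Only compiled in when -debug (or -debug=<level/ident>) is passed:
    debug writeln("debug block compiled in");
    writeln("always printed");
}
---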


Re: Line numbers in backtraces (2017)

2017-11-01 Thread Moritz Maxeiner via Digitalmars-d-learn
On Wednesday, 1 November 2017 at 06:44:44 UTC, Tobias Pankrath 
wrote:
On Tuesday, 31 October 2017 at 11:21:30 UTC, Moritz Maxeiner 
wrote:
On Tuesday, 31 October 2017 at 11:04:57 UTC, Tobias Pankrath 
wrote:

[...]
??:? pure @safe void 
std.exception.bailOut!(Exception).bailOut(immutable(char)[], 
ulong, const(char[])) [0xab5c9566]
??:? pure @safe bool std.exception.enforce!(Exception, 
bool).enforce(bool, lazy const(char)[], immutable(char)[], 
ulong) [0xab5c94e2]



I've found this StackOverflow question from 2011 [1], and if I remember correctly this could be fixed by adding -L--export-dynamic, which is already part of my dmd.conf.


[...]

[1] 
https://stackoverflow.com/questions/8209494/how-to-show-line-numbers-in-d-backtraces


Does using dmd's `-g` option (compile with debug symbols) not 
work[1]?


[1] This is also what the answer in your linked SO post suggests.


Of course I've tried this.


Including Phobos? Your posted backtrace looks to me like 
templates instantiated within Phobos, so I think you'd need 
Phobos with debug symbols for those lines.


Re: SublimeLinter-contrib-dmd: dmd feedback as you type

2017-10-31 Thread Moritz Maxeiner via Digitalmars-d-announce

On Tuesday, 31 October 2017 at 16:00:25 UTC, Bastiaan Veelo wrote:

[...]

Out of curiosity, what other plugins from [2] do you use in 
Sublime Text? How are they integrating with dub?


If that question is open to the general public: None, I hacked my 
own [1] to suit my exact needs.


[1] https://github.com/MoritzMaxeiner/sublide


Re: SublimeLinter-contrib-dmd: dmd feedback as you type

2017-10-31 Thread Moritz Maxeiner via Digitalmars-d-announce

On Tuesday, 31 October 2017 at 13:32:34 UTC, SrMordred wrote:

Thank you, works perfectly!

One idea: integrating with dub, so you don't have to manually set lib dirs and flags, since it's all in 'dub.json' already.


You can pretty much copy paste from sublide for this [1] (my own 
D plugin for ST3).


[1] 
https://github.com/MoritzMaxeiner/sublide/blob/master/dub.py#L40


Re: Line numbers in backtraces (2017)

2017-10-31 Thread Moritz Maxeiner via Digitalmars-d-learn
On Tuesday, 31 October 2017 at 11:04:57 UTC, Tobias Pankrath 
wrote:

[...]
??:? pure @safe void 
std.exception.bailOut!(Exception).bailOut(immutable(char)[], 
ulong, const(char[])) [0xab5c9566]
??:? pure @safe bool std.exception.enforce!(Exception, 
bool).enforce(bool, lazy const(char)[], immutable(char)[], 
ulong) [0xab5c94e2]



I've found this StackOverflow question from 2011 [1], and if I remember correctly this could be fixed by adding -L--export-dynamic, which is already part of my dmd.conf.


[...]

[1] 
https://stackoverflow.com/questions/8209494/how-to-show-line-numbers-in-d-backtraces


Does using dmd's `-g` option (compile with debug symbols) not 
work[1]?


[1] This is also what the answer in your linked SO post suggests.


Re: My first experience as a D Newbie

2017-10-19 Thread Moritz Maxeiner via Digitalmars-d
On Thursday, 19 October 2017 at 09:10:04 UTC, Ecstatic Coder 
wrote:


For instance, here I say that I don't agree that the "easy" way 
to use D is by using FreeBSD instead of Windows.


Here is the answer:

"I remember those events very differently, so here they are for 
posterity:


That post of mine is not an answer to that statement, as can be 
seen by what exactly I quoted.


Re: My first experience as a D Newbie

2017-10-19 Thread Moritz Maxeiner via Digitalmars-d
On Thursday, 19 October 2017 at 08:17:04 UTC, Ecstatic Coder 
wrote:
On Thursday, 19 October 2017 at 07:04:14 UTC, Moritz Maxeiner 
wrote:
On Thursday, 19 October 2017 at 06:32:10 UTC, Ecstatic Coder 
wrote:

[...]

OK actually my initial proposal was this one :

http://forum.dlang.org/post/mailman.6425.1503876081.31550.digitalmars-d-b...@puremagic.com

[...]

And the definitive answer about that is of course something like "Hey man, it's open source, it's all made by volunteers in their free time, so it must be complicated, what did you expect?" and "Make all the changes by yourself if you don't like it the way it is.".


Seriously ?

OK, message received. If putting two download links per detected platform on the main page is too much work for the volunteers who maintain the landing page, then let's keep it the way it is. I have a lot of work and a family life too, no problem...



I remember those events very differently, so here they are for 
posterity:


http://forum.dlang.org/thread/llreleiqxjllthmlg...@forum.dlang.org?page=1
http://forum.dlang.org/post/cxunwfnhdrlpujjxz...@forum.dlang.org


That's exactly what I said.

Thanks for confirming what I have written.


This does not confirm what you wrote, because what you referred 
to as your initial proposal was the result of the previous thread 
I linked to, not the basis for it.




And please stop the personal attacks, thanks.


What I wrote is not - and cannot be construed as - a personal 
attack (as it does not state - or imply - anything about you as a 
person). I consider your accusation slander, however.


Re: My first experience as a D Newbie

2017-10-19 Thread Moritz Maxeiner via Digitalmars-d
On Thursday, 19 October 2017 at 06:32:10 UTC, Ecstatic Coder 
wrote:

[...]

OK actually my initial proposal was this one :

http://forum.dlang.org/post/mailman.6425.1503876081.31550.digitalmars-d-b...@puremagic.com

[...]

And the definitive answer about that is of course something like "Hey man, it's open source, it's all made by volunteers in their free time, so it must be complicated, what did you expect?" and "Make all the changes by yourself if you don't like it the way it is.".


Seriously ?

OK, message received. If putting two download links per detected platform on the main page is too much work for the volunteers who maintain the landing page, then let's keep it the way it is. I have a lot of work and a family life too, no problem...



I remember those events very differently, so here they are for 
posterity:


http://forum.dlang.org/thread/llreleiqxjllthmlg...@forum.dlang.org?page=1
http://forum.dlang.org/post/cxunwfnhdrlpujjxz...@forum.dlang.org




Re: what means... auto ref Args args?

2017-10-18 Thread Moritz Maxeiner via Digitalmars-d

On Wednesday, 18 October 2017 at 21:38:41 UTC, Dave Jones wrote:

Poking around in the source code for emplace and I noticed...

T* emplace(T, Args...)(T* chunk, auto ref Args args)

what does the "auto ref" do in this situiation? Cant seem to 
find any explanation in the docs.


It means that any argument (that is an element of args) will be 
passed by reference if and only if it's an lvalue (has a memory 
address that can be taken) (it'll be passed by value otherwise).


https://dlang.org/spec/template.html#auto-ref-parameters
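
A small illustrative sketch (the function name `take` is made up) that shows the binding at compile time:

---
void take(T)(auto ref T x)
{
    // __traits(isRef, x) reports how this particular instantiation bound x.
    static if (__traits(isRef, x))
        pragma(msg, "bound by reference (lvalue argument)");
    else
        pragma(msg, "bound by value (rvalue argument)");
}

void main()
{
    int a = 1;
    take(a);   // lvalue -> instantiation that takes x by reference
    take(42);  // rvalue -> separate instantiation that takes x by value
}
---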


Re: What is the Philosophy of D?

2017-10-16 Thread Moritz Maxeiner via Digitalmars-d

On Monday, 16 October 2017 at 00:25:32 UTC, codephantom wrote:
D's overview page says "It doesn't come with an overriding philosophy."


Is philosophy not important?

I'd like to argue that focusing on getting the job done quickly and reliably does *not* leave behind maintainable, easy-to-understand code, but rather leads to unintended outcomes ...


If the philosophy of C is 'the programmer is in charge', what might the philosophy of D be?


e.g. Maximum precision in expression, perhaps?


"Get it done, but also right"


DIP 1009 Status ?

2017-10-11 Thread Moritz Maxeiner via Digitalmars-d
As it has been a while since I've seen an update on DIP 1009, I'd like to ask what its current status is: has it been closed for feedback and has the second stage (submission to the language authors) been initiated (as [1] requires)?


[1] https://github.com/dlang/DIPs


Re: Why do I have to cast arguments from int to byte?

2017-10-10 Thread Moritz Maxeiner via Digitalmars-d-learn

On Tuesday, 10 October 2017 at 19:55:36 UTC, Chirs Forest wrote:
I keep having to make casts like the following and it's really 
rubbing me the wrong way:


void foo(T)(T bar){...}

byte bar = 9;

[...]

Why?


Because of integer promotion [1], which is inherited from C.

[1] https://dlang.org/spec/type.html#integer-promotions
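
A minimal sketch (values made up) of where the cast comes from: the arithmetic is carried out in int, and the int result does not implicitly narrow back to byte:

---
void foo(T)(T bar) {}

void main()
{
    byte a = 9;
    byte b = 1;

    // Arithmetic on byte operands is promoted to int:
    static assert(is(typeof(a + b) == int));

    // byte c = a + b;          // error: cannot implicitly convert int to byte
    byte c = cast(byte)(a + b); // explicit narrowing cast required
    foo(c);                     // T is inferred as byte again
}
---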


Re: D on quora ...

2017-10-06 Thread Moritz Maxeiner via Digitalmars-d

On Friday, 6 October 2017 at 21:12:58 UTC, Rion wrote:


I can make a few simple demos and have D use by default 5 to 10 times more memory than the exact same C or C++ program. While D does not actually use it (it's only marked as allocated for the GC), it does not dispel the notion or feeling of people that GC = bad.


You can configure the GC to deal with that [1].



Other aspects, like being unsure when the GC will trigger, can also push people towards a non-GC language.


In general: the GC can only trigger when you request memory from it.
W.r.t. the current GC implementation, it will trigger when it doesn't have enough memory to fulfill an allocation request.
In short: you're always in control of exactly when GC pauses can occur. I recommend the GC series for further information [2].


[1] https://dlang.org/spec/garbage.html#gc_config
[2] https://dlang.org/blog/the-gc-series/
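
As a sketch of the mechanism from [1] (the option values below are made up, not recommendations), the GC can be configured either on the command line via --DRT-gcopt or by embedding the options in the binary:

---
// Embedded GC configuration; equivalent to running the program as
//   ./app "--DRT-gcopt=initReserve:16 minPoolSize:16 profile:1"
extern(C) __gshared string[] rt_options = [
    "gcopt=initReserve:16 minPoolSize:16 profile:1" // sizes are in MB
];

void main()
{
    auto buf = new ubyte[](1024 * 1024); // allocations now use the tuned GC settings
}
---

profile:1 prints a GC usage summary at program termination, which helps when judging whether the "marked as allocated" memory is actually a problem.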


Re: Default allocator of container plus element type

2017-10-05 Thread Moritz Maxeiner via Digitalmars-d

On Thursday, 5 October 2017 at 11:35:30 UTC, Nordlöw wrote:
Would it be possible to set up a mapping (either formal or 
informal) of each typical container (such as array, linked-list 
etc) plus element type to a suitable default allocator? And 
perhaps add this to the documentation of 
`std.experimental.allocator`?


I currently get the feeling that most programmers have no idea 
at all what allocator to choose for each given combination of 
container and element type they want to use.


I'm skeptical of such a mapping being universally applicable (or even applicable in the majority of cases), because the best allocation strategy for a specific use case is not determined solely by (container, element); it's (at least) (container, element, usage scenario, operating system, hardware), and I'm sure I forgot some.
Realistically, if you care about picking the best memory 
allocator for your use case, imho you should write your 
application to be agnostic to the allocator used and then 
benchmark it with a bunch of different allocators.
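
A minimal sketch (the `Node` type is made up) of what allocator-agnostic code can look like with std.experimental.allocator, so that the allocation strategy can be swapped out for benchmarking:

---
import std.experimental.allocator : dispose, make, theAllocator;
import std.experimental.allocator.mallocator : Mallocator;

struct Node { int value; }

void main()
{
    // Default allocator behind the IAllocator interface (GC-backed by default):
    auto a = theAllocator.make!Node(1);
    scope (exit) theAllocator.dispose(a);

    // Same code shape with an explicit malloc-based allocator:
    auto b = Mallocator.instance.make!Node(2);
    scope (exit) Mallocator.instance.dispose(b);
}
---

Since theAllocator is settable, the strategy behind it can be swapped for a benchmark run without touching the allocation sites.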


Re: Passing data and ownership to new thread

2017-09-26 Thread Moritz Maxeiner via Digitalmars-d
On Tuesday, 26 September 2017 at 09:10:41 UTC, James Brister 
wrote:
I'm pretty new to D, but from what I've seen there are two modes of using data across threads: (a) immutable message passing, where the new thread copies the data if it needs to be modified, and (b) shared, assuming the data will be modified by both threads, with the limits that imposes. But why is there nothing for passing an object of some sort to another thread with the ownership moving to the new thread?


If you're talking about the language:
Because D doesn't have any builtin concept of ownership.
If you're talking about Phobos (import std.{...}):
Because a general solution is not a trivial problem (see 
Jonathan's answer for more detail).


I suppose this would be hard to enforce at the language level, but wouldn't you want this when trying to pass large-ish data structures from one thread to another (thinking of a network server, such as a DHCP server, that has a thread handling the network interface which reads the incoming requests and then passes them off to another thread to handle)?


In such server code you're probably better off distributing the 
request reading (and potentially even the client socket 
accepting) to multiple workers, e.g. having multiple threads (or 
processes for that matter, as that can minimize downtime when 
combined with process supervision) listening on their own socket 
with the same address:port (see SO_REUSEPORT).


If you really want to do it, though, the way I'd start going 
about it would be with a classic work queue / thread pool system. 
Below is pseudo code showing how to do that for a oneshot request 
scenario.


[shared data]
work_queue (synchronize methods e.g. with mutex or use a lockfree queue)


main thread:
    loop:
        auto client_socket = accept(...);
        // Allocate request on the heap
        Request* request = client_socket.readRequest(...);
        // Send a pointer to the request to the work queue
        work_queue ~= tuple(client_socket, request);
        // Poor man's ownership by forgetting about client_socket and request here


worker thread:
    loop:
        ...
        auto job = work_queue.pop();
        scope (exit) { close(job[0]); free(job[1]); }
        auto response = job[1].handle();
        job[0].writeResponse(response);
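
For the work_queue part, a rough D sketch (names made up, not from the pseudo code above) of a mutex-plus-condition-variable blocking queue; in real multi-threaded use the queue instance would additionally need to be shared (or __gshared) with the appropriate casts:

---
import core.sync.condition : Condition;
import core.sync.mutex : Mutex;

final class WorkQueue(T)
{
    private Mutex mtx;
    private Condition cond;
    private T[] items;

    this()
    {
        mtx = new Mutex;
        cond = new Condition(mtx);
    }

    void push(T item)
    {
        synchronized (mtx)   // locks mtx for the duration of the block
        {
            items ~= item;
            cond.notify();   // wake up one waiting worker
        }
    }

    T pop()
    {
        synchronized (mtx)
        {
            while (items.length == 0)
                cond.wait(); // releases mtx while blocked, reacquires afterwards
            auto item = items[0];
            items = items[1 .. $];
            return item;
        }
    }
}
---

std.concurrency or std.parallelism can replace much of this hand-rolled machinery, but the sketch keeps the structure of the pseudo code above.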



Re: Passing data and ownership to new thread

2017-09-26 Thread Moritz Maxeiner via Digitalmars-d
On Tuesday, 26 September 2017 at 09:10:41 UTC, James Brister 
wrote:
I'm pretty new to D, but from what I've seen there are two modes of using data across threads: (a) immutable message passing, where the new thread copies the data if it needs to be modified, and (b) shared, assuming the data will be modified by both threads, with the limits that imposes. But why is there nothing for passing an object of some sort to another thread with the ownership moving to the new thread?


If you're talking about Phobos (import std.{...}):
AFAICT because no one has had a strong enough need to implement such a thing and propose it for Phobos inclusion.

If you're talking about the language:
Because D doesn't have any builtin concept of ownership.

I suppose this would be hard to enforce at the language level, but wouldn't you want this when trying to pass large-ish data structures from one thread to another (thinking of a network server, such as a DHCP server, that has a thread handling the network interface which reads the incoming requests and then passes them off to another thread to handle)?


In such server code you're probably better off distributing the 
request reading (and potentially even the client socket 
accepting) to multiple workers, e.g. having multiple threads (or 
processes for that matter, as that can minimize downtime when 
combined with process supervision) listening on their own socket 
with the same address:port (see SO_REUSEPORT).


If you really want to do it, though, the way I'd start going 
about it would be with a classic work queue / thread pool system. 
Below is pseudo code showing how to do that for a oneshot request 
scenario.


[shared data]
work_queue (protect methods with mutex or use a lockfree queue)

main thread:
    loop:
        auto client_socket = accept(...);
        // Allocate request on the heap
        Request* request = client_socket.readRequest(...);
        // Send a pointer to the request to the work queue
        work_queue ~= tuple(client_socket, request);
        // Model "ownership" by forgetting about client_socket and request here

worker thread:
    loop:
        ...
        auto job = work_queue.pop();
        scope (exit) { close(job[0]); free(job[1]); }
        auto response = job[1].handle();
        job[0].writeResponse(response);



Re: The case for integer overflow checks?

2017-09-18 Thread Moritz Maxeiner via Digitalmars-d

On Monday, 18 September 2017 at 22:32:28 UTC, Dennis Cote wrote:
On Monday, 18 September 2017 at 13:25:55 UTC, Andrei 
Alexandrescu wrote:
For the record, with the help of std.experimental.checkedint, 
the change that fixes the code would be:


malloc(width * height * 4) ==> malloc((checked(width) * height 
* 4).get)


That aborts the application with a message if a multiplication 
overflows.


Can it do something other than abort? Can it throw an overflow 
exception that could be caught to report the error and continue?


Yes. Use one of the provided hooks (e.g. [1][2][3]) or write one 
that fits your use case.


[1] 
https://dlang.org/phobos/std_experimental_checkedint.html#Abort
[2] 
https://dlang.org/phobos/std_experimental_checkedint.html#Throw

[3] https://dlang.org/phobos/std_experimental_checkedint.html#Warn
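
For example, a small sketch (numbers made up) using the Throw hook [2] so the overflow can be reported and execution can continue:

---
import std.experimental.checkedint : checked, Throw;
import std.stdio : writeln;

void main()
{
    int width = 1 << 30, height = 8;
    try
    {
        auto bytes = (checked!Throw(width) * height * 4).get;
        writeln("allocating ", bytes, " bytes");
    }
    catch (Exception e)
    {
        writeln("overflow detected: ", e.msg); // report and continue instead of aborting
    }
}
---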


Re: scope(exit) and destructor prioity

2017-09-18 Thread Moritz Maxeiner via Digitalmars-d-learn

On Monday, 18 September 2017 at 20:55:21 UTC, Sasszem wrote:


If I write "auto a = new De()", then it calls the scope first, 
no matter where I place it.


Because with `new`
a) your struct object is located on the heap (and referred to by 
pointer - `De*`) instead of the stack (which means no destructors 
for it are called at function scope end), and
b) the lifetime of your struct object is determined by D's 
garbage collector, which may or may not eventually collect it, 
finalizing it in the process (calling the destructor, as D 
doesn't separate finalizers and destructors a.t.m.).
In your case, it sounds like the GC collection cycle that (in the 
current implementation) occurs just before druntime shutdown 
collects it.

I highly recommend reading The GC Series on the D blog [1].

[1] https://dlang.org/blog/the-gc-series/
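
A small sketch contrasting the two lifetimes (`De` taken from your description; output order as I would expect it per reverse lexical ordering, not verified on every compiler version):

---
import std.stdio : writeln;

struct De
{
    ~this() { writeln("~De"); }
}

void main()
{
    scope (exit) writeln("scope(exit)");

    De onStack;           // destroyed deterministically when main's scope ends
    auto onHeap = new De; // De*: finalization is up to the GC (possibly never)

    writeln("end of main body");
    // Expected: "end of main body", then "~De" (onStack), then "scope(exit)";
    // a second "~De" may appear if the GC finalizes onHeap during shutdown.
}
---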


Re: My friend can't install DMD 2.076.0 after he deleted contents of C:\D

2017-09-18 Thread Moritz Maxeiner via Digitalmars-d-learn
On Sunday, 17 September 2017 at 05:33:12 UTC, rikki cattermole 
wrote:


Skip Revo-Uninstaller, no idea why you'd ever use such trial 
software.
Anyway what you want is CCleaner, standard software that all 
Windows installs should have on hand.


http://blog.talosintelligence.com/2017/09/avast-distributes-malware.html
https://www.piriform.com/news/blog/2017/9/18/security-notification-for-ccleaner-v5336162-and-ccleaner-cloud-v1073191-for-32-bit-windows-users


Re: OpIndex/OpIndexAssign strange order of execution

2017-09-18 Thread Moritz Maxeiner via Digitalmars-d-learn
On Monday, 18 September 2017 at 15:11:34 UTC, Moritz Maxeiner 
wrote:


gets rewritten to

---
t.opIndex("b").opIndexAssign(t["a"].value, "c");
---


Sorry, forgot one level of rewriting:

---
t.opIndex("b").opIndexAssign(t.opIndex("a").value, "c");
---


Re: OpIndex/OpIndexAssign strange order of execution

2017-09-18 Thread Moritz Maxeiner via Digitalmars-d-learn

On Sunday, 17 September 2017 at 18:52:39 UTC, SrMordred wrote:

struct Test{ [...] }

Test t;


As described in the spec [1]


t["a"] = 100;


gets rewritten to

---
t.opIndexAssign(100, "a");
---

, while


t["b"]["c"] = t["a"].value;


gets rewritten to

---
t.opIndex("b").opIndexAssign(t["a"].value, "c");
---

, which has to result in your observed output (left-to-right 
evaluation order):




//OUTPUT:
opIndexAssign : index : a , value : 100
opIndex : index : b
opIndex : index : a
property value : 100
opIndexAssign : index : c , value : 100

//EXPECTED OUTPUT
opIndexAssign : index : a , value : 100
opIndex : index : a
property value : 100
opIndex : index : b
opIndexAssign : index : c , value : 100

Is this right?


AFAICT from the spec, yes. Your expected output does not match 
D's rewriting rules for operator overloading.




I find this mix of operations on the left and right side of an assignment operator unexpected.


Adding some more examples to the spec to show the results of the 
rewriting rules could be useful, but AFAICT it's unambiguous.


On Monday, 18 September 2017 at 13:38:48 UTC, SrMordred wrote:

Should I report this as a bug?


Not AFAICT.



I tried equivalent C++ code and it executes in the expected order.


D does not (in general) match C++ semantics.

[1] https://dlang.org/spec/operatoroverloading.html


Re: extern(C) enum

2017-09-18 Thread Moritz Maxeiner via Digitalmars-d-learn

On Monday, 18 September 2017 at 02:04:49 UTC, bitwise wrote:


The following code will run fine on Windows, but crash on iOS 
due to the misaligned access:


Interesting, does iOS crash such a process intentionally, or is 
it a side effect?




char data[8];
int i = 0x;
int* p = (int*)&data[1];


Isn't this already undefined behaviour (6.3.2.3 p.7 of C11 [1] - 
present in earlier versions also, IIRC)?



*p++ = i;
*p++ = i;
*p++ = i;


The last of these is also a buffer overflow.

[1] http://iso-9899.info/n1570.html


Re: Assertion Error

2017-09-13 Thread Moritz Maxeiner via Digitalmars-d-learn

On Wednesday, 13 September 2017 at 15:12:57 UTC, Vino.B wrote:
On Wednesday, 13 September 2017 at 11:03:38 UTC, Moritz 
Maxeiner wrote:

On Wednesday, 13 September 2017 at 07:39:46 UTC, Vino.B wrote:


Hi Max,

 [...]

Program Code:
[...]
 foreach (string Fs; parallel(SizeDirlst[0 .. $], 1))
 {
auto FFs = Fs.strip;
auto MSizeDirList = task(&coSizeDirList, FFs, SizeDir);
MSizeDirList.executeInNewThread();
auto MSizeDirListData = MSizeDirList.workForce;
MSresult.get ~= MSizeDirListData;
 }


[...]

---
foreach (string Fs; parallel(SizeDirlst[0 .. $], 1))
{
  MSresult.get ~= coSizeDirList(Fs.strip, SizeDir);
}
---


Hi Max,


It's Moritz, not Max. ;)



 Below is the explanation of the above code.

[...]


AFAICT that's a reason why you want parallelization of coSizeDirList, but not why you need to spawn another thread inside of an *already parallelized* task. Try my shortened parallel foreach loop vs your longer one and monitor system load (threads, memory, etc.).


Re: Known reasons why D crashes without any message?

2017-09-13 Thread Moritz Maxeiner via Digitalmars-d
On Wednesday, 13 September 2017 at 10:20:48 UTC, Thorsten Sommer 
wrote:

[...]

Besides the unit tests, the main program is now able to start up, but crashes after a while without any message at all. No stack trace, no exception, nothing. Obviously, this makes it hard to debug anything...


[...]

Are there any well-known circumstances, bugs, etc. where an abrupt interruption of a D program without any message is possible? My expectation was that I would receive at least a stack trace. For debugging, I disabled parallelism entirely in order to eliminate effects like exceptions being hidden in threads, missing/wrong variable sharing, etc.


[...]


Things D generally depends on the platform to deal with (such as 
null pointer dereferences) won't yield you a message from the D 
side.
What is the exit code of the program? If it's of the form `128+n` with `n == SIGXYZ` you know more about why it crashed [1]. If the exit code is 139, e.g., you know some code tried to access memory via an invalid reference (as SIGSEGV == 11 on Linux x64), which often means you dereferenced a null pointer somewhere.


[1] http://www.tldp.org/LDP/abs/html/exitcodes.html
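
If the crashing program is started from D, a small sketch (binary name made up) of retrieving that information with std.process, where on POSIX a negative wait status encodes the terminating signal:

---
import std.process : spawnProcess, wait;
import std.stdio : writeln;

void main()
{
    auto pid = spawnProcess(["./crashing-app"]); // hypothetical binary under test
    immutable status = wait(pid);
    if (status >= 0)
        writeln("exited normally with code ", status);
    else
        writeln("terminated by signal ", -status); // e.g. 11 == SIGSEGV on Linux x64
}
---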


Re: Assertion Error

2017-09-13 Thread Moritz Maxeiner via Digitalmars-d-learn

On Wednesday, 13 September 2017 at 07:39:46 UTC, Vino.B wrote:
On Tuesday, 12 September 2017 at 21:01:26 UTC, Moritz Maxeiner 
wrote:

On Tuesday, 12 September 2017 at 19:44:19 UTC, vino wrote:

Hi All,

I have a small piece of code which executes perfectly 8 out of 10 times; very rarely it throws an assertion error. Is there a way to find which line of code is causing this error?


You should be getting the line number as part of the crash, 
like here:


--- test.d ---
void main(string[] args)
{
assert(args.length > 1);
}
--

-
$ dmd -run test.d

core.exception.AssertError@test.d(3): Assertion failure
[Stack trace]
-

If you don't, what are the steps to reproduce?


Hi Max,

I tried to run the code at least 80+ times and it ran without any issue; I will let you know in case I hit the same issue in future. Below is the piece of code, please do let me know if you find any issue with it.


Program Code:
[...]
 foreach (string Fs; parallel(SizeDirlst[0 .. $], 1))
 {
auto FFs = Fs.strip;
auto MSizeDirList = task(&coSizeDirList, FFs, SizeDir);
MSizeDirList.executeInNewThread();
auto MSizeDirListData = MSizeDirList.workForce;
MSresult.get ~= MSizeDirListData;
 }


From reading it, I don't see anything that I would expect to assert, 
but I am wondering why you first parallelize your work with a 
thread pool (`parallel(...)`) and then inside each (implicitly 
created) task (that is already being serviced by a thread in the 
thread pool) you create another task, have it executed in a new 
thread, and make the thread pool thread wait for that thread to 
complete servicing that new task.
This should yield the same result, but without the overhead of 
spawning additional threads:


---
foreach (string Fs; parallel(SizeDirlst[0 .. $], 1))
{
  MSresult.get ~= coSizeDirList(Fs.strip, SizeDir);
}
---


Re: Assertion Error

2017-09-12 Thread Moritz Maxeiner via Digitalmars-d-learn

On Tuesday, 12 September 2017 at 19:44:19 UTC, vino wrote:

Hi All,

I have a small piece of code which executes perfectly 8 out of 10 times; very rarely it throws an assertion error. Is there a way to find which line of code is causing this error?


You should be getting the line number as part of the crash, like 
here:


--- test.d ---
void main(string[] args)
{
assert(args.length > 1);
}
--

-
$ dmd -run test.d

core.exception.AssertError@test.d(3): Assertion failure
[Stack trace]
-

If you don't, what are the steps to reproduce?


Re: Adding empty static this() causes exception

2017-09-12 Thread Moritz Maxeiner via Digitalmars-d-learn

On Tuesday, 12 September 2017 at 19:59:52 UTC, Joseph wrote:
On Tuesday, 12 September 2017 at 10:08:11 UTC, Moritz Maxeiner 
wrote:

On Tuesday, 12 September 2017 at 09:11:20 UTC, Joseph wrote:
I have two nearly duplicate files I added a static this() to 
initialize some static members of an interface.


On one file when I add an empty static this() it crashes 
while the other one does not.


The exception that happens is
Cyclic dependency between module A and B.

Why does this occur on an empty static this? Is it being run twice or something? Any way to fix this?


The compiler errors because the spec states [1]

Each module is assumed to depend on any imported modules 
being statically constructed first


, which means two modules that import each other and both use 
static construction have no valid static construction order.


One reason, I think, why the spec states that is because in 
theory it would not always be possible for the compiler to 
decide the order, e.g. when executing them changes the 
("shared") execution environment's state:


---
module a;
import b;

static this()
{
// Does something to the OS state
syscall_a();
}
---

---
module b;
import a;

static this()
{
// Also does something to the OS state
syscall_b();
}
---

The "fix" as I see it would be to either not use static 
construction in modules that import each other, or propose a 
set of rules for the spec that define an always-solvable subset 
for the compiler.


[1] https://dlang.org/spec/module.html#order_of_static_ctor


The compiler shouldn't arbitrarily force one to make arbitrary 
decisions that waste time and money.


My apologies, I confused compiler and runtime when writing that 
reply (the detection algorithm resulting in your crash is built 
into druntime).

The runtime, however, is compliant with the spec on this AFAICT.

The compiler should only run the static this's once per module 
load anyways, right?


Static module constructors are run once per module per thread [1] 
(if you want once per module you need shared static module 
constructors).
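
A short sketch of the difference:

---
import core.thread : Thread;
import std.stdio : writeln;

shared static this()
{
    writeln("runs once per process, at startup");
}

static this()
{
    writeln("runs once per thread that runs module constructors");
}

void main()
{
    auto t = new Thread({
        writeln("worker body; this thread's `static this` ran before this line");
    });
    t.start();
    t.join();
}
---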


If it is such a problem then some way around it should be 
included: @force static this() { } ?


The only current workaround is what Biotronic mentioned: You can 
customize the druntime cycle detection via the --DRT-oncycle 
command line option [2].


The compiler shouldn't make assumptions about the code I write 
and always choose the worse case, it becomes an unfriendly 
relationship at that point.


If your point remains when replacing 'compiler' with 'runtime': 
It makes no assumptions in the case you described, it enforces 
the language specification.


[1] https://dlang.org/spec/module.html#staticorder
[2] https://dlang.org/spec/module.html#override_cycle_abort


Re: Adding empty static this() causes exception

2017-09-12 Thread Moritz Maxeiner via Digitalmars-d-learn

On Tuesday, 12 September 2017 at 09:11:20 UTC, Joseph wrote:
I have two nearly duplicate files I added a static this() to 
initialize some static members of an interface.


On one file when I add an empty static this() it crashes while 
the other one does not.


The exception that happens is
Cyclic dependency between module A and B.

Why does this occur on an empty static this? Is it being run twice or something? Any way to fix this?


The compiler errors because the spec states [1]

Each module is assumed to depend on any imported modules being 
statically constructed first


, which means two modules that import each other and both use 
static construction have no valid static construction order.


One reason, I think, why the spec states that is because in 
theory it would not always be possible for the compiler to decide 
the order, e.g. when executing them changes the ("shared") 
execution environment's state:


---
module a;
import b;

static this()
{
// Does something to the OS state
syscall_a();
}
---

---
module b;
import a;

static this()
{
// Also does something to the OS state
syscall_b();
}
---

The "fix" as I see it would be to either not use static 
construction in modules that import each other, or propose a set 
of rules for the spec that define an always-solvable subset for 
the compiler.


[1] https://dlang.org/spec/module.html#order_of_static_ctor




Re: Ranges seem awkward to work with

2017-09-11 Thread Moritz Maxeiner via Digitalmars-d-learn

On Tuesday, 12 September 2017 at 01:13:29 UTC, Hasen Judy wrote:
Is this a common beginner issue? I remember using an earlier version of D a long time ago and I don't remember seeing this concept.




D's ranges can take getting used to, so if you haven't already, 
these two articles are worth the read to get familiar with them 
imho [1][2].
One way to look at it is that input ranges (empty, front, popFront) model iteration over the elements of some data source (another is that they model a monotonically advancing data source).


[1] 
http://www.drdobbs.com/architecture-and-design/component-programming-in-d/240008321

[2] https://wiki.dlang.org/Component_programming_with_ranges
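
A minimal sketch of that model (the type name is made up); anything providing empty/front/popFront composes with the Phobos algorithms:

---
struct Counter // advances monotonically through front .. end
{
    int front;
    int end;

    @property bool empty() const { return front >= end; }
    void popFront() { ++front; }
}

void main()
{
    import std.algorithm.iteration : map;
    import std.stdio : writeln;

    foreach (i; Counter(0, 3))
        writeln(i);                         // 0, 1, 2

    Counter(0, 3).map!(x => x * 2).writeln; // [0, 2, 4]
}
---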


Re: Address of data that is static, be it shared or tls or __gshared or immutable on o/s

2017-09-11 Thread Moritz Maxeiner via Digitalmars-d-learn

On Monday, 11 September 2017 at 22:38:21 UTC, Walter Bright wrote:


If an address is taken to a TLS object, any relocations and 
adjustments are made at the time the pointer is generated, not 
when the pointer is dereferenced.


Could you elaborate on that explanation more? The way I thought 
about it was that no matter where the data is actually stored 
(global, static, tls, heap, etc.), in order to access it by 
pointer it must be mapped into virtual memory (address) space. 
From that it follows that each thread will have its own "slice" 
of that address space. Thus, if you pass an address into such a 
slice (that happens to be mapped to the TLS of a thread) to other 
threads, you can manipulate the first thread's TLS data (and 
cause the usual data races without proper synchronization, of 
course).


Re: LDC 1.4.0

2017-09-11 Thread Moritz Maxeiner via Digitalmars-d-announce

On Monday, 11 September 2017 at 23:32:55 UTC, kinke wrote:

Hi everyone,

on behalf of the LDC team, I'm glad to announce LDC 1.4.0. The 
highlights of version 1.4 in a nutshell:


* Based on D 2.074.1.
[...]


Fantastic news, thanks for your work!



Re: D on devdocs

2017-09-11 Thread Moritz Maxeiner via Digitalmars-d-announce

On Monday, 11 September 2017 at 03:23:47 UTC, ANtlord wrote:
Hello. I'm not sure that you know, but the documentation of the D language has come to devdocs.io. It is a web service that provides offline documentation. We've got a useful tool for viewing and reading documentation. The next step is an implementation of version support.


Didn't know about it until it was mentioned in another thread 
here recently [1], but I do think this is great and I especially 
like their dark theme (clean, minimal).
Do you know how much work it would be to reuse devdocs (I see it is open source) as a basis for hosting dub package docs?


[1] 
http://forum.dlang.org/thread/mailman.6556.1504522081.31550.digitalmar...@puremagic.com


Re: betterC and struct destructors

2017-09-11 Thread Moritz Maxeiner via Digitalmars-d-learn

On Monday, 11 September 2017 at 10:18:41 UTC, Oleg B wrote:
Hello. I tried using a destructor in betterC code, and it works if the outer function doesn't return a value (void). Code in `scope (exit)` works the same (if the function is void, all is ok).


In the documentation I found https://dlang.org/spec/betterc.html#consequences, paragraph 12: struct destructors.


[...]


It's an implementation issue [1][2][3].

[1] https://issues.dlang.org/show_bug.cgi?id=17603
[2] https://github.com/dlang/dmd/pull/6923
[3] 
https://www.reddit.com/r/programming/comments/6ijwek/dlangs_dmd_now_compiles_programs_in_betterc_mode/dj7dncc/


Re: -betterC and extern(C++) classes

2017-09-10 Thread Moritz Maxeiner via Digitalmars-d

On Sunday, 10 September 2017 at 15:08:50 UTC, Yuxuan Shui wrote:

By the way, can we dynamic_cast extern(C++) classes in C++?


It doesn't work for me OOTB with dmd 2.075, at least (though I 
may be missing something):


--- classes.d ---
import core.memory;

extern(C++) class Parent
{
char id()
{
return 'P';
}
}

extern(C++) class Child : Parent
{
override char id()
{
return 'C';
}
}

extern(C++) Parent makeParent()
{
  auto p = new Parent;
  GC.addRoot(cast(void*) p);
  return p;
}

extern(C++) Child makeChild()
{
  auto c = new Child;
  GC.addRoot(cast(void*) c);
  return c;
}

extern(C++) void releaseParent(Parent p)
{
  GC.removeRoot(cast(void*) p);
}

extern(C++) void releaseChild(Child c)
{
  GC.removeRoot(cast(void*) c);
}
-

--- main.cc ---
#include <iostream>

class Parent
{
  public:
virtual char id();
};

class Child : public Parent
{
};

extern Parent* makeParent();
extern Child* makeChild();

extern void releaseParent(Parent* p);
extern void releaseChild(Child* c);


extern "C" void rt_init();
extern "C" void rt_term();

bool isChild(Parent* p)
{
  return nullptr != dynamic_cast<Child*>(p);
}

int main(int argc, char** argv)
{
  rt_init();

  Parent* p = makeParent();
  Child* c = makeChild();

  std::cout << p->id() << "\n";
  std::cout << c->id() << "\n";

  std::cout << "identifier\tChild\n";
  std::cout << "p\t" << isChild(p) << "\t";
  std::cout << "c\t" << isChild(c) << "\t";

  releaseChild(c);
  releaseParent(p);

  rt_term();
  return 0;
}
---

--- Compile ---
$ dmd -c classes.d
$ g++ -std=c++11 -o main main.cc classes.o 
-L/path/to/dmd-2.075/lib64/ -lphobos2

/tmp/ccwjJ9Xe.o: In function `isChild(Parent*)':
main.cc:(.text+0x20): undefined reference to `typeinfo for Parent'
/tmp/ccwjJ9Xe.o:(.rodata._ZTI5Child[_ZTI5Child]+0x10): undefined 
reference to `typeinfo for Parent'

collect2: error: ld returned 1 exit status



Re: -betterC and extern(C++) classes

2017-09-10 Thread Moritz Maxeiner via Digitalmars-d

On Sunday, 10 September 2017 at 15:12:12 UTC, Yuxuan Shui wrote:
On Sunday, 10 September 2017 at 14:42:42 UTC, Moritz Maxeiner 
wrote:
On Sunday, 10 September 2017 at 14:04:20 UTC, rikki cattermole 
wrote:

On 10/09/2017 2:19 PM, Moritz Maxeiner wrote:
If TypeInfo for extern(C++) classes is removed, couldn't 
final extern(C++) classes without base class and which don't 
implement any interfaces omit the vtable so that the 
following assert holds:


---
final extern(C++) class Foo {}
static assert (__traits(classInstanceSize, Foo) == 0LU);
---

The reason I ask is that fairly often I have an abstraction 
that's better suited as a reference type than a value type, 
but doesn't need any runtime polymorphy (or the monitor that standard classes have). Structs + pointers are the only way 
I know of to avoid the (in this special case) unneeded 
vtable overhead, but it always ends up looking worse to read.


We can do it for any class if it's final.


Even final classes can always inherit (potentially already overridden) virtual methods from their parent classes, and since all normal D classes inherit from core.object : Object [1], which defines virtual methods (`toString`, `toHash`, `opCmp`, and `opEquals`), I don't see how this can be true.


With a final class reference, we always know what function to 
call at compile time (since it can't be inherited). Therefore 
we don't need a vtable.


---
class Base
{
string name()
{
return "Base";
}
}

class ChildA : Base
{
override string name()
{
return "ChildA";
}
}

final class ChildB : Base
{
override string name()
{
return "ChildB";
}
}

void foo(Base b)
{
import std.stdio;
b.name.writeln;
}

void main(string[] args)
{
Base b;
if (args.length > 1) b = new ChildA;
else b = new ChildB;
b.foo();
}
---

It is not known at compile time which class `foo`'s parameter `b` 
will actually be an instance of (Base, ChildA, or ChildB), so 
static dispatch is not possible; it requires dynamic dispatch. 
Since dynamic dispatch for classes is done in D via vtables, 
ChildB needs a vtable.


Re: -betterC and extern(C++) classes

2017-09-10 Thread Moritz Maxeiner via Digitalmars-d
On Sunday, 10 September 2017 at 14:04:20 UTC, rikki cattermole 
wrote:

On 10/09/2017 2:19 PM, Moritz Maxeiner wrote:
If TypeInfo for extern(C++) classes is removed, couldn't final 
extern(C++) classes without base class and which don't 
implement any interfaces omit the vtable so that the following 
assert holds:


---
final extern(C++) class Foo {}
static assert (__traits(classInstanceSize, Foo) == 0LU);
---

The reason I ask is that fairly often I have an abstraction 
that's better suited as a reference type than a value type, 
but doesn't need any runtime polymorphy (or the monitor that standard classes have). Structs + pointers are the only way I 
know of to avoid the (in this special case) unneeded vtable 
overhead, but it always ends up looking worse to read.


We can do it for any class if it's final.


Even final classes can always inherit (potentially already overridden) virtual methods from their parent classes, and since all normal D classes inherit from core.object : Object [1], which defines virtual methods (`toString`, `toHash`, `opCmp`, and `opEquals`), I don't see how this can be true.


The problem isn't generating the vtables, but the information required for casting.


This applies to normal D classes, but D doesn't support (dynamic) 
casts for extern(C++) classes, anyway, so this shouldn't be an 
issue for them.


[1] 
https://github.com/somzzz/druntime/blob/74882c8a48dd8a827181e3b89c4f0f205c881ac5/src/object.d#L50


Re: -betterC and extern(C++) classes

2017-09-10 Thread Moritz Maxeiner via Digitalmars-d

On Sunday, 10 September 2017 at 09:31:55 UTC, Walter Bright wrote:

On 9/10/2017 1:40 AM, Yuxuan Shui wrote:
I was experimenting with -betterC and found out that C++ classes don't work, because the resulting object file needs a symbol "_D14TypeInfo_Class6__vtblZ" which is in druntime. I suppose this is to support T.classinfo?


Could we remove T.classinfo and make classes work under 
-betterC? Or is there some other reason preventing this from 
happening?


Yes, we do want to move towards "Better C++" working in an 
analogous manner, and that means removing the typeinfo 
dependency.


If TypeInfo for extern(C++) classes is removed, couldn't final 
extern(C++) classes without base class and which don't implement 
any interfaces omit the vtable so that the following assert holds:


---
final extern(C++) class Foo {}
static assert (__traits(classInstanceSize, Foo) == 0LU);
---

The reason I ask is that fairly often I have an abstraction 
that's better suited as a reference type than a value type, but 
doesn't need any runtime polymorphy (or the monitor that standard classes have). Structs + pointers are the only way I know of to 
avoid the (in this special case) unneeded vtable overhead, but it 
always ends up looking worse to read.


Re: Testing whether static foreach is available

2017-09-10 Thread Moritz Maxeiner via Digitalmars-d

On Sunday, 10 September 2017 at 01:49:42 UTC, Mike Parker wrote:

On Saturday, 9 September 2017 at 16:53:19 UTC, jmh530 wrote:


version(DLANGSEMVER >= 2.076.0)
{
   //include static foreach code.
}

but we can't do conditionals in version blocks, so you'd need 
a static if. Alternately, we'd also need a trait or something 
to get the semver of the compiler.


Actually:

static if(__VERSION__ >= 2076)

Better to use this than the compiler version, as it specifies 
the version of the front end, so when LDC and GDC catch up it 
will work for those as well.


If __VERSION__ is *guaranteed* to be the language frontend 
version (and testing shows it is in dmd, ldc, and gdc, at least), 
shouldn't the spec [1] state that (right now it's only required 
to be some generic "compiler version")?


[1] https://dlang.org/spec/lex.html#specialtokens



Re: Can attributes trigger functionality?

2017-09-05 Thread Moritz Maxeiner via Digitalmars-d-learn
On Wednesday, 6 September 2017 at 02:43:20 UTC, Psychological 
Cleanup wrote:


I'm having to create a lot of boilerplate code that creates "events" and corresponding properties (getter and setter).


I'm curious if I can simplify this without a string mixin.

If I create my own attribute like

@Event double foo();

can I write any code that will trigger when the event is used and add more code (such as the setter property and events that I need)?


Obviously I could write some master template that scans everything, but that seems to be far too much overkill. A string mixin is probably my only option, but it is a bit ugly for me.


Since attributes can be defined by structures, it seems natural that we could put functionality in them that is triggered when they are used, but I'm unsure if D has such capabilities.


Thanks.


User defined attributes (UDAs) are in and of themselves only (compile time) introspectable decoration [1] (they only carry information). If you want to trigger specific behaviour for things that are attributed with a UDA, you indeed need some custom-written active component that introspects using `__traits(getAttributes, symbol)` and generates/injects the behaviour (e.g. using a string mixin as you noted).


[1] https://dlang.org/spec/attribute.html#UserDefinedAttribute
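
To make that concrete, a small sketch (all identifiers are made up) of the introspect-and-generate pattern:

---
struct Event {} // the UDA is only a marker carrying information

string generateGetters(T)()
{
    string code;
    foreach (member; __traits(allMembers, T))
    {
        foreach (uda; __traits(getAttributes, __traits(getMember, T, member)))
        {
            static if (is(uda == Event))
                code ~= "auto " ~ member ~ "_get(ref " ~ T.stringof ~ " t)"
                      ~ " { return t." ~ member ~ "; }\n";
        }
    }
    return code;
}

struct Foo
{
    @Event double bar;
    int untagged;
}

mixin(generateGetters!Foo()); // injects: auto bar_get(ref Foo t) { return t.bar; }

void main()
{
    auto f = Foo(1.5);
    assert(f.bar_get() == 1.5);
}
---

Whether the generation happens via a string mixin (as here) or template mixins is mostly a matter of taste; the UDA itself never triggers anything on its own.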


Re: C `restrict` keyword in D

2017-09-05 Thread Moritz Maxeiner via Digitalmars-d

On Tuesday, 5 September 2017 at 18:32:34 UTC, Johan Engelen wrote:
On Monday, 4 September 2017 at 21:23:50 UTC, Moritz Maxeiner 
wrote:
On Monday, 4 September 2017 at 17:58:41 UTC, Johan Engelen 
wrote:


(The spec requires crashing on null dereferencing, but this 
spec bit is ignored by DMD and LDC, I assume in GDC too.
Crashing on `null` dereferencing requires a null-check on 
every dereferencing through an unchecked pointer, because 0 
might be a valid memory access, and also because 
ptr->someDataField is not going to lookup address 0, but 
0+offsetof(someDataField) instead, e.g. potentially 
addressing a valid low address at 100, say.)


It's not implemented as compiler checks because the "actual" 
requirement is "the platform has to crash on null dereference" 
(see the discussion in/around [1]). Essentially: "if your 
platform doesn't crash on null dereference, don't use D on it 
(at the very least not @safe D)".


My point was that that is not workable. The "null dereference" 
is a D language construct, not something that the machine is 
doing.


While "null dereference" is a language construct, "null" is defined as the actual address zero (as it is defined by implementation in C/C++) and dereferencing means reading from / writing to that virtual memory address, so it is something the machine does: namely, memory protection. Because the page for address 0 is (usually) not mapped (and D requires it to not be mapped for @safe to work), accessing it will lead to a page fault, which in turn leads to a segmentation fault and then a program crash.


It's ridiculous to specify that reading from address 1_000_000 
should crash the program, yet that is exactly what is specified 
by D when running this code (and thus null checks need to be 
injected in many places to be spec compliant):


```
struct S {
  ubyte[1_000_000] a;
  int b;
}
void main() {
   S* s = null;
   s.b = 1;
}
```


In order to be spec compliant and correct a compiler would only 
need to inject null checks on dereferences where the size of the 
object being pointed to (in your example S.sizeof) is larger than 
the bottom virtual memory segment of the target OS (the one which 
no C compatible OS maps automatically and you also shouldn't map 
manually).
The size of that bottom segment, however, is usually 
_deliberately_ large precisely so that buggy (C) programs crash 
on NULL dereference (even with structures as the above), so in 
practice, unless you invalidate assumptions about expected 
maximum structure sizes made by the OS, null dereferences can be 
assumed to crash.


Re: C `restrict` keyword in D

2017-09-04 Thread Moritz Maxeiner via Digitalmars-d

On Monday, 4 September 2017 at 17:58:41 UTC, Johan Engelen wrote:
On Monday, 4 September 2017 at 09:47:12 UTC, Moritz Maxeiner 
wrote:

On Monday, 4 September 2017 at 09:15:30 UTC, ag0aep6g wrote:

On 09/04/2017 06:10 AM, Moritz Maxeiner wrote:
Indeed, but it also means that - other than null dereferencing - pointer issues can be made into reference issues by dereferencing a pointer and passing that into a function that takes that parameter by reference.


Why "other than null dereferencing"? You can dereference a 
null pointer and pass it in a ref parameter. That doesn't 
crash at the call site, but only when the callee accesses the 
parameter:


[...]


Because I was ignorant and apparently wrong, thanks for the 
correction.
Still, though, this is surprising to me, because this means taking the address of a parameter passed by reference (which in your case is typed as an existing int) can yield null. Is this documented somewhere (couldn't find it in the spec and it seems like a bug to me)?


LDC treats passing `null` to a reference parameter as UB.
It doesn't matter when the program crashes after passing null 
to ref, exactly because it is UB.


Ok, that's good to know, though it'd be nice for this to be 
defined somewhere in the language spec.


Because the caller has to do the dereferencing (semantically) 
you only have to do the null-check in the caller, and not in 
callee. This removes a ton of manual null-ptr checks from the 
code, and enables more optimizations too.


Indeed, which is why I currently think the spec should state that 
this isn't UB, but has to crash at the call site.


For class parameters, they are pointers not references, as in: 
it is _not_ UB to pass-in `null`. Very unfortunate, because it 
necessitates null-ptr checks everywhere in the code, and hurts 
performance due to missed optimization opportunities.


Well, technically they are "class references". In any case, they don't require injecting null checks from the compiler in general, as using them in any way will be a null dereference (which the hardware is required to turn into a crash).




(The spec requires crashing on null dereferencing, but this 
spec bit is ignored by DMD and LDC, I assume in GDC too.
Crashing on `null` dereferencing requires a null-check on every 
dereferencing through an unchecked pointer, because 0 might be 
a valid memory access, and also because ptr->someDataField is 
not going to lookup address 0, but 0+offsetof(someDataField) 
instead, e.g. potentially addressing a valid low address at 
100, say.)


It's not implemented as compiler checks because the "actual" 
requirement is "the platform has to crash on null dereference" 
(see the discussion in/around [1]). Essentially: "if your 
platform doesn't crash on null dereference, don't use D on it (at 
the very least not @safe D)".
The issue concerning turning a pointer into a reference parameter 
is that when reading the code it looks like the dereference is 
happening at the call site, while the resulting compiled 
executable will actually perform the (null) dereference inside 
the function on use of the reference parameter. That is why I 
think the null check should be injected at the call site, because 
depending on platform support, relying on the crash may yield the wrong result (if the reference parameter isn't actually used in the function, it won't crash, even though it *should*).


[1] 
https://forum.dlang.org/post/udkdqogtrvanhbotd...@forum.dlang.org


Re: C `restrict` keyword in D

2017-09-04 Thread Moritz Maxeiner via Digitalmars-d

On Monday, 4 September 2017 at 10:24:48 UTC, ag0aep6g wrote:

On 09/04/2017 11:47 AM, Moritz Maxeiner wrote:
Still, though, this is surprising to me, because this means taking the address of a parameter passed by reference (which in your case is typed as an existing int) can yield null. Is this documented somewhere (couldn't find it in the spec and it seems like a bug to me)?


I'm only aware of this part of the spec, which doesn't say much 
about ref parameters:


https://dlang.org/spec/function.html#parameters

g++ accepts the equivalent C++ code and shows the same 
behavior. But, as far as I can tell, it's undefined behavior 
there, because dereferencing null has undefined behavior.


In D, dereferencing a null pointer is expected to crash the 
program. It's allowed in @safe code with that expectation. So 
it seems to have defined behavior that way.


Yes, which is why I wrongly assumed that turning a null pointer 
into a reference would crash the program (as such references 
can't be tested for being null, you'd have to turn them back into 
a pointer to test).




But if dereferencing null must crash the program, shouldn't my code crash at the call site? Or is there an exception for ref parameters? Anyway, the spec seems to be missing some paragraphs that clear all this up.


Yes, that is what I meant by saying it looks like a bug to me. It 
really ought to crash at the call site imho; this would require 
injecting null checks at the call site when the argument is a 
pointer dereference.


Re: C `restrict` keyword in D

2017-09-04 Thread Moritz Maxeiner via Digitalmars-d

On Monday, 4 September 2017 at 09:15:30 UTC, ag0aep6g wrote:

On 09/04/2017 06:10 AM, Moritz Maxeiner wrote:
Indeed, but it also means that - other than null dereferencing - pointer issues can be made into reference issues by dereferencing a pointer and passing that into a function that takes that parameter by reference.


Why "other than null dereferencing"? You can dereference a null 
pointer and pass it in a ref parameter. That doesn't crash at 
the call site, but only when the callee accesses the parameter:


[...]


Because I was ignorant and apparently wrong, thanks for the 
correction.
Still, though, this is surprising to me, because this means taking the address of a parameter passed by reference (which in your case is typed as an existing int) can yield null. Is this documented somewhere (couldn't find it in the spec and it seems like a bug to me)?


Re: C `restrict` keyword in D

2017-09-03 Thread Moritz Maxeiner via Digitalmars-d

On Monday, 4 September 2017 at 02:43:48 UTC, Uknown wrote:
On Sunday, 3 September 2017 at 16:55:51 UTC, Moritz Maxeiner 
wrote:

On Sunday, 3 September 2017 at 15:39:58 UTC, Uknown wrote:
On Sunday, 3 September 2017 at 12:59:25 UTC, Moritz Maxeiner 
wrote:

[...]
The main issue I see is that pointers/references can change 
at runtime, so I don't think a static analysis in the 
compiler can cover this in general (which, I think, is also 
why the C99 keyword is an optimization hint only).


Well, I thought about it, I have to agree with you, as far as 
pointers go. There seems to be no simple way in which the 
compiler can safely ensure that the two restrict pointers 
point to the same data. But for references, it seems trivial.


References are just non-null syntax for pointers that take 
addresses implicitly on function call. Issues not related to 
null that pertain to pointers translate to references, as any 
(non-null) pointer can be turned into a reference (and vice 
versa):


---
void foo(int* a, bool b)
{
if (b) bar(a);
else baz(*a);
}

void bar(int* a) {}
void baz(ref int a) { bar(&a); }
---


Yes. But this is what makes them so useful. You don't have to 
worry about null dereferences.


Indeed, but it also means that - other than null dereferencing - pointer issues can be made into reference issues by dereferencing a pointer and passing that into a function that takes that parameter by reference.






In order to do so, RCArray would have to first annotate its opIndex, opSlice and any other data-returning member functions with the restrict keyword, e.g.

struct RCArray(T) @safe
{
private T[] _payload;
	/+some other functions needed to implement RCArray 
correctly+/

restrict ref T opIndex(size_t i) {
//implimentation as usual
return _payload[i];
}
restrict ref T[] opIndex() {
return _payload;
}
//opSlice and the rest defined similary
}
[...]


Note: There's no need to attribute the RCArray template as 
@safe (other than for debugging when developing the template). 
The compiler will derive it for each member if they are indeed 
@safe.


Indeed. I just wrote it to emphasize the fact that it's safe.

W.r.t. the rest: I don't think treating references as 
different from pointers can be done correctly, as any 
pointers/references can be interchanged at runtime.


I'm not sure I understand how one could switch between pointers 
and refs at runtime. Could you please elaborate a bit or link 
to an example? Thanks.


What I meant (and apparently poorly expressed) is that you can turn a pointer into a reference (as long as it's not null), taking the address of a "ref" yields a pointer, and, as in my `foo` example above, which path is taken can change at runtime. You can, e.g., generate a reference to an object's member without the compiler being able to detect it by calculating the appropriate pointer and then dereferencing it.




Re: Bug in D!!!

2017-09-03 Thread Moritz Maxeiner via Digitalmars-d-learn
On Monday, 4 September 2017 at 03:08:50 UTC, EntangledQuanta 
wrote:
On Monday, 4 September 2017 at 01:50:48 UTC, Moritz Maxeiner 
wrote:
On Sunday, 3 September 2017 at 23:25:47 UTC, EntangledQuanta 
wrote:
On Sunday, 3 September 2017 at 11:48:38 UTC, Moritz Maxeiner 
wrote:
On Sunday, 3 September 2017 at 04:18:03 UTC, EntangledQuanta 
wrote:
On Sunday, 3 September 2017 at 02:39:19 UTC, Moritz 
Maxeiner wrote:
On Saturday, 2 September 2017 at 23:12:35 UTC, 
EntangledQuanta wrote:

[...]


The contexts being independent of each other doesn't 
change that we would still be overloading the same keyword 
with three vastly different meanings. Two is already bad 
enough imho (and if I had a good idea with what to replace 
the "in" for AA's I'd propose removing that meaning).


Why? Don't you realize that the context matters and [...]


Because instead of seeing the keyword and knowing its one 
meaning you also have to consider the context it appears in. 
That is intrinsically more work (though the difference may 
be very small) and thus harder.


...


Yes, in an absolute sense it will take more time to have to parse the context. But that sounds like a case of "pre-optimization".


I don't agree, because once something is in the language 
syntax, removing it is a long deprecation process (years), so 
these things have to be considered well beforehand.


That's true. But I don't see how it matters too much in the 
current argument. Remember, I'm not advocating using 'in' ;) 
[...]


It matters, because that makes it not be _early_ optimization.






If we are worried about saving time then what about the 
tooling? compiler speed? IDE startup time? etc?
All these take time too and optimizing one single aspect, as 
you know, won't necessarily save much time.


Their speed generally does not affect the time one has to 
spend to understand a piece of code.


Yes, but you are picking and choosing. [...]


I'm not (in this case), as the picking is implied by discussing 
PL syntax.


So, in this case I have to go with the practical side and say that it may be theoretically slower, but it is such an insignificant cost that it is an over-optimization. I think you would agree, at least in this case.


Which is why I stated I'm opposing overloading `in` here as a 
matter of principle, because even small costs sum up in the 
long run if we get into the habit of just overloading.




I know, You just haven't convinced me enough to change my 
opinion that it really matters at the end of the day. It's 
going to be hard to convince me since I really don't feel as 
strongly as you do about it. That might seem like a 
contradiction, but


I'm not trying to convince you of anything.



Again, the exact syntax is not important to me. If you really 
think it matters that much to you and it does(you are not 
tricking yourself), then use a different keyword.


My proposal remains to not use a keyword and just upgrade 
existing template specialization.


[...]

You just really haven't stated that principle in any clear way 
for me to understand what you mean until now. i.e., Stating 
something like "... of a matter of principle" without stating 
which principle is ambiguous. Because some principles are not 
real. Some base their principles on fictitious things, some on 
abstract ideals, etc. Basing something on a principle that is 
firmly established is meaningful.


I've stated the principle several times in varied forms of 
"syntax changes need to be worth the cost".


I have a logical argument against your absolute restriction 
though... in that it causes one to have to use more 
symbols. I would imagine you are against stuff like using 
"in1", "in2", etc because they visibly are to close to each 
other.


It's not an absolute restriction, it's an absolute position 
from which I argue against including such overloading on 
principle.
If it can be overcome by demonstrating that it can't 
sensibly be done without more overloading and that it adds 
enough value to be worth the increased overloading, I'd be 
fine with inclusion.


[...]

To simplify it down: Do you have the same problems with all 
the ambiguities that already exist in almost all programming 
languages that everyone is ok with on a practical level on a 
daily basis?


Again, you seem to mix ambiguity and context sensitivity.
W.r.t. the latter: I have a problem with those occurrences 
where I don't think the costs I associate with them are 
outweighed by their benefits (e.g. with the `in` keyword's 
overloaded meaning for AA's).


Not mixing, I exclude real ambiguities because they have no real 
meaning. I thought I mentioned something about that way back 
when, but who knows... Although, I'd be curious if any 
programming languages existed whose grammar was ambiguous and 
could actually be realized?


Sure, see the dangling else problem I mentioned. It's just that 
people basically all agree on one of the choices and all stick 
with it (despite the grammar being formally 

Re: Bug in D!!!

2017-09-03 Thread Moritz Maxeiner via Digitalmars-d-learn
On Sunday, 3 September 2017 at 23:25:47 UTC, EntangledQuanta 
wrote:
On Sunday, 3 September 2017 at 11:48:38 UTC, Moritz Maxeiner 
wrote:
On Sunday, 3 September 2017 at 04:18:03 UTC, EntangledQuanta 
wrote:
On Sunday, 3 September 2017 at 02:39:19 UTC, Moritz Maxeiner 
wrote:
On Saturday, 2 September 2017 at 23:12:35 UTC, 
EntangledQuanta wrote:

[...]


The contexts being independent of each other doesn't change 
that we would still be overloading the same keyword with 
three vastly different meanings. Two is already bad enough 
imho (and if I had a good idea with what to replace the "in" 
for AA's I'd propose removing that meaning).


Why? Don't you realize that the contexts matters and [...]


Because instead of seeing the keyword and knowing its one 
meaning you also have to consider the context it appears in. 
That is intrinsically more work (though the difference may be 
very small) and thus harder.


...


Yes, In an absolute sense, it will take more time to have to 
parse the context. But that sounds like a case of 
"pre-optimization".


I don't agree, because once something is in the language syntax, 
removing it is a long deprecation process (years), so these 
things have to be considered well beforehand.


If we are worried about saving time then what about the 
tooling? compiler speed? IDE startup time? etc?
All these take time too and optimizing one single aspect, as 
you know, won't necessarily save much time.


Their speed generally does not affect the time one has to spend 
to understand a piece of code.




Maybe the language itself should be designed so there are no 
ambiguities at all? A single simple symbol for each function? A 
new keyboard design should be implemented (ultimately a direct 
brain-to-editor interface for the fastest time, excluding the 
time for development and learning)?


I assume you mean "without context sensitive meanings" instead of 
"no ambiguities", because the latter should be the case as a 
matter of course (and mostly is, with few exceptions such as the 
dangling else ambiguity in C and friends).
Assuming the former: As I stated earlier, it needs to be worth 
the cost.




So, in this case I have to go with the practical of saying that 
it may be theoretically slower, but it is such an insignificant 
cost that it is an over optimization. I think you would agree, 
at least in this case.


Which is why I stated I'm opposing overloading `in` here as a 
matter of principle, because even small costs sum up in the long 
run if we get into the habit of just overloading.


Again, the exact syntax is not important to me. If you really 
think it matters that much to you and it does(you are not 
tricking yourself), then use a different keyword.


My proposal remains to not use a keyword and just upgrade 
existing template specialization.




When I see something I try to see it at once rather [...]




To really counter your argument: What about parenthesis? They 
too have the same problem with in. They have perceived 
ambiguity... but they are not ambiguity. So your argument 
should be said about them too and you should be against them 
also, but are you? [To be clear here: foo()() and (3+4) have 3 
different use cases of ()'s... The first is templated 
arguments, the second is function arguments, and the third is 
expression grouping]


That doesn't counter my argument, it just states that parentheses 
have these costs, as well (which they do). The primary question 
would still be if they're worth that cost, which imho they are. 
Regardless of that, though, since they are already part of the 
language syntax (and are not going to be up for change), this is 
not something we could do something about, even if we agreed they 
weren't worth the cost.
New syntax, however, is up for that kind of discussion, because 
once it's in it's essentially set in stone (not quite, but *very* 
slow to remove/change because of backwards compatibility).



[...]


Well, yes, as I wrote, I think it is unambiguous (and can 
thus be used), I just think it shouldn't be used.


Yes, but you have only given the reason that it shouldn't be 
used because you believe that one shouldn't overload keywords 
because it makes it harder to parse the meaning. My rebuttal, 
as I have said, is that it is not harder, so your argument is 
not valid. All you could do is claim that it is hard and we 
would have to find out who is more right.


As I countered that in the above, I don't think your rebuttal 
is valid.


Well, hopefully I countered that in my rebuttal of your 
rebuttal of my rebuttal ;)


Not as far as I see it, though I'm willing to agree to disagree :)


I have a logical argument against your absolute restriction 
though... in that it causes one to have to use more symbols. 
I would imagine you are against stuff like using "in1", 
"in2", etc because they visibly are to close to each other.


It's not an absolute restriction, it's an absolute position 
from which I argue against including such overloading 

Re: C `restrict` keyword in D

2017-09-03 Thread Moritz Maxeiner via Digitalmars-d

On Sunday, 3 September 2017 at 15:39:58 UTC, Uknown wrote:
On Sunday, 3 September 2017 at 12:59:25 UTC, Moritz Maxeiner 
wrote:

[...]
The main issue I see is that pointers/references can change at 
runtime, so I don't think a static analysis in the compiler 
can cover this in general (which, I think, is also why the C99 
keyword is an optimization hint only).


Well, I thought about it, and I have to agree with you as far as 
pointers go. There seems to be no simple way in which the 
compiler can statically ensure that two restrict pointers do not 
point to the same data. But for references, it seems trivial.


References are just non-null syntax for pointers that take 
addresses implicitly on function call. Issues not related to null 
that pertain to pointers translate to references, as any 
(non-null) pointer can be turned into a reference (and vice 
versa):


---
void foo(int* a, bool b)
{
    if (b) bar(a);
    else baz(*a);
}

void bar(int* a) {}
void baz(ref int a) { bar(&a); }
---



In order to do so, RCArray would have to first annotate its 
opIndex, opSlice, and any other data-returning member functions 
with the restrict keyword, e.g.

struct RCArray(T) @safe
{
    private T[] _payload;
    /+ some other functions needed to implement RCArray correctly +/

    restrict ref T opIndex(size_t i) {
        // implementation as usual
        return _payload[i];
    }
    restrict T[] opIndex() {
        return _payload;
    }
    // opSlice and the rest defined similarly
}
[...]


Note: There's no need to attribute the RCArray template as @safe 
(other than for debugging when developing the template). The 
compiler will derive it for each member if they are indeed @safe.


W.r.t. the rest: I don't think treating references as different 
from pointers can be done correctly, as any pointers/references 
can be interchanged at runtime.




Coming back to pointers, the only way I can see (short of 
bringing Rust's borrow checker to D) is to add additional 
annotations to function return values. The problem comes with 
code like this :


int* foo() @safe
{
    static int[1] data;
    return &data[0];
}

void main()
{
    int* restrict p1 = foo();
    int* restrict p2 = foo(); // Should be an error, but the compiler
                              // can't figure this out without further
                              // annotations
}


Dealing with pointer aliasing in a generic way is a hard problem 
:p





Re: C++ / Why Iterators Got It All Wrong

2017-09-03 Thread Moritz Maxeiner via Digitalmars-d
On Sunday, 3 September 2017 at 14:19:19 UTC, Ilya Yaroshenko 
wrote:
On Sunday, 3 September 2017 at 12:35:16 UTC, Moritz Maxeiner 
wrote:
On Sunday, 3 September 2017 at 09:24:03 UTC, Ilya Yaroshenko 
wrote:
On Sunday, 3 September 2017 at 02:43:51 UTC, Moritz Maxeiner 
wrote:
On Sunday, 3 September 2017 at 02:08:20 UTC, Ilya Yaroshenko 
wrote:
On Tuesday, 29 August 2017 at 12:50:08 UTC, Robert M. Münch 
wrote:
Maybe of interest: 
https://www.think-cell.com/en/career/talks/iterators/#1


I haven't read everything, so not sure if it is worth taking a 
look.


Iterators have no proper alternative when one needs to 
implement a generic tensor library like Mir Algorithm [1].


Out of curiosity: Could you elaborate on what the issues are 
with using a range based API internally (if it's 
performance, the "why")?


There are three general kinds of dynamic dense tensors. Mir 
implements all of them:


1. Contiguous tensors. Their data is located contiguously in 
memory. Single dense memory chunk. All strides between 
sub-tensors can be computed from the lengths.


2. Canonical tensors. Only the data for one dimension is dense; 
other dimensions have strides that cannot be computed from the 
lengths. BLAS matrices are canonical tensors: they have two 
lengths and one stride.


3. Universal tensors. Each dimension has a stride. Numpy 
ndarrays are universal tensors.


Finite random access range Issues:

1. The main API issue is that a full-featured random access 
range (RAR) API (with slicing and all opIndex* operators like 
`[]*=`) is much larger compared with an unbounded random access 
iterator (the same API as a pointer).


Won't you have to implement opIndex* operators either way in 
order to use the `a[]` syntax? The main difference I can see 
should be that you'd also have to implement the InputRange 
(front,popFront,empty), ForwardRange (save), and 
BidirectionalRange (back,popBack) members, which if you don't 
need them, is indeed understandably too much?


front,
popFront,
empty,
save,
back,
popBack,
opIndexOpAssign,
2 x opIndex(.opSlice) for rhs scalars and ranges
2 x opIndexAssign(.opSlice) for rhs scalars and ranges
2 x opIndexOpAssign(.opSlice) for rhs scalars and ranges

Plus, because of the range topology slicing may be slower, so it 
is good to add these ones:


popFrontN
popFrontExactly,
popBackN
popBackExactly,

and maybe I forgot something ...


Didn't know about the *N / *Exactly ones, but yeah, it's a lot. 
Though unless you aren't interested in the `a[]` syntax, you 
can't avoid opIndex*.






2. A random access range holds its length. Yes, Phobos has 
infinite ranges (Mir hasn't), but in practice almost any RAR 
constructed using Phobos has a length, and you cannot strip it 
once the type is constructed. Anyway, an infinite RAR is just a 
pointer/iterator that can move forward but cannot move backward. 
Tensors hold their own lengths for each dimension, so the 
range's length is just a useless payload.


I'm not sure what you mean here. Is this still about accessing 
the elements of one tensor? If so, what do Phobos' ranges have 
to do with your tensor type's API?


ndslice is:
1. Slice structure with a huge amount of features from mir 
algorithm
2. multidimensional RAR: .front!0, front!1, .front!(N - 1) and 
all other generalized RAR primitves including multidimensional 
index [1, 2, 3, ...] and slicing [1 .. 4, 3 .. $].
3. A common one-dimensional RAR on top of the outermost 
dimension, so one can use Phobos functions on the Slice 
structure. For example, a matrix will be iterated row by row.


But this paragraph was about another issue:

struct BLASMatrixTypeBasedOnRanges
{
    double[] _data; // a dynamic array is the simplest RAR
    size_t rows, cols;
    size_t stride;
}


Now I get what you mean, you were talking about not using the 
dynamic arrays built into the language (which indeed carry their 
length for safety purposes), not about exposing range API here; 
you're right, you don't need to use a dynamic array here, as you 
already have the length encoded another way (it would be 
wasteful), but strictly speaking D's builtin dynamic arrays are 
not ranges (as they are neither aggregate types nor do they have 
the range functions baked into the language). They only become 
ranges when you import the appropriate free functions from Phobos 
(or define some yourself).
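
For illustration, the range primitives for a plain T[] only 
appear once the corresponding Phobos free functions are 
imported:

---
void main()
{
    import std.range.primitives : empty, front, popFront;

    int[] a = [1, 2, 3];
    assert(!a.empty);  // free functions, found via UFCS
    assert(a.front == 1);
    a.popFront();      // shrinks the slice, doesn't touch the data
    assert(a == [2, 3]);
}
---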




sizeof(YourBLASMatrixTypeBasedOnRanges) == size_t.sizeof * 5;

on the other hand:

sizeof(Slice!(Canonical, [2], double*)) == size_t.sizeof * 4;

Ranges require 25% more space for canonical matrices than 
iterators.


We do have to differentiate between a container and the API with 
which to iterate over its elements.
D's dynamic arrays (a pointer+length) require more space than 
using only a raw pointer, of course.
iterator depends on the type of iterator you use. Most common 
iterator implementations I know consist of a begin and an end 
pointer, essentially requiring the same space as D's dynamic 
arrays.
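
A quick size check of the three variants just mentioned 
(assuming the usual two-pointer layout for a begin/end iterator 
pair; the numbers hold on the common platforms where a pointer 
is size_t-sized):

---
struct BeginEnd(T) // classic C++-style iterator pair
{
    T* begin;
    T* end;
}

static assert((int[]).sizeof      == 2 * size_t.sizeof); // ptr + length
static assert(BeginEnd!int.sizeof == 2 * size_t.sizeof); // two pointers
static assert((int*).sizeof       == 1 * size_t.sizeof); // bare cursor
---
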
In the case you describe, though, we aren't talking 

Re: C `restrict` keyword in D

2017-09-03 Thread Moritz Maxeiner via Digitalmars-d

On Sunday, 3 September 2017 at 06:11:10 UTC, Uknown wrote:
On Sunday, 3 September 2017 at 03:49:21 UTC, Moritz Maxeiner 
wrote:

On Sunday, 3 September 2017 at 03:04:58 UTC, Uknown wrote:

[...]

void foo(ref RCArray!int arr, ref int val) @safe
{
    {
        auto copy = arr;       // arr's (and copy's) reference counts are both 2
        arr = RCArray!int([]); // There is another owner, so arr
                               // forgets about the old payload
    } // Last owner of the array ('copy') gets destroyed and happily
      // frees the payload.
    val = 3; // Oops.
}

Here, adding `restrict` to foo's parameters like so:

void foo(restrict ref RCArray!int arr, restrict ref int val)

would make the compiler statically enforce that the two 
references are not pointing to the same data. This would cause 
an error in main, since arr[0] is from the same block of memory 
as arr.


How does the compiler know which member of RCArray!int to 
check for pointing to the same memory chunk as val?


If I understand C's version of restrict correctly, the pointers 
must not refer to the same block. So extending the same here, 
val should not be allowed to be a reference to any members of 
RCArray!int.


AFAICT that's not what's needed here for safety, though: RCArray 
will have a member (`data`, `store`, or something like that) 
pointing to the actual elements (usually on the heap). You 
essentially want `val` not to point into the same memory chunk as 
`data` points into, which is different from `val` not pointing to 
a member of RCArray.
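
To make that concrete, a stripped-down sketch (the field name 
`_store` and the refcount-free skeleton are made up here):

---
struct RCArray(T)
{
    private T[] _store; // heap chunk; reference counting omitted
    ref T opIndex(size_t i) { return _store[i]; }
}

void foo(ref RCArray!int arr, ref int val) { /* ... */ }

void main()
{
    auto arr = RCArray!int(new int[4]);
    // `val` would alias into the chunk _store points to, not into
    // the RCArray struct itself; that overlap is what a `restrict`
    // check would actually have to detect.
    foo(arr, arr[0]);
}
---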




This does seem to get more confusing when the heap is 
involved as a member of a struct.

e.g.
[...]


Right, that's essentially what an RCArray does, as well.

I feel that in this case, the compiler should throw an error, 
since val would be a reference to a member pointed to by 
_someArr, which is a member of x. Although, I wonder if such 
analysis would be feasible? This case is trivial, but there 
could be more complicated cases.


The main issue I see is that pointers/references can change at 
runtime, so I don't think a static analysis in the compiler can 
cover this in general (which, I think, is also why the C99 
keyword is an optimization hint only).


Re: C++ / Why Iterators Got It All Wrong

2017-09-03 Thread Moritz Maxeiner via Digitalmars-d
On Sunday, 3 September 2017 at 08:37:36 UTC, Robert M. Münch 
wrote:

On 2017-09-02 21:27:58 +, Moritz Maxeiner said:


Thanks for your post about Rebol, I didn't know it before.


As said, the official Rebol-2 version is a dead end. Even our 
main product is still based on it :-) 15-year-old technology, 
but still working, and we know all its problems. So very few 
surprises. And it's very productive.


There exists a half-done official Rebol-3 version as 
open source. It was picked up by some and continued. And then 
there is a thing called Red, which uses a lot of the ideas but 
compiles. Worth a look too. It's really cool, because it 
compiles native apps for Android without any SDK, for example.


Overall, it's worth spending some time with Rebol. I'm sure you 
won't want your time back and can learn a lot. Things to look 
at: VIEW (GUI), PARSE (for parsing) and, after this, using PARSE 
to create DSLs with Rebol. Very cool feature.


I'll put it on my ever-growing list of things to check out in 
depth, thanks :p





After reading through the series chaper, though, AFAICT Rebol 
series *are* iterators (begin+end), just with really nice, 
functional (read: LISP) syntax?


There is no difference between code & data in Rebol. And it has 
a very rich set of datatypes, IIRC about 35 native ones. And 
many of them are series, which can be operated on in the same way.


Sounds like LISP :)



From my experience, traversing data structures with a functional 
syntax and concept is really natural to work with. It's mostly 
what you would tell someone to do using English.


I agree, though I was talking about what the abstract data type 
of a "series" is, i.e. what operations it exposes. From my 
observation:

A D input range exposes via empty/front/popFront.
A classic iterator exposes via begin/end.
A Rebol series seems to be a safer form of iterator, as it 
doesn't expose begin/end directly, but exposes restricted 
operations that are defined as manipulating begin/end.
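
For reference, the minimal shape of the first of those three, 
with a small counting range standing in for a real container:

---
struct Iota
{
    int i, n;
    @property bool empty() const { return i >= n; }
    @property int front() const { return i; }
    void popFront() { ++i; }
}

void main()
{
    int sum;
    foreach (x; Iota(0, 5)) sum += x; // foreach drives empty/front/popFront
    assert(sum == 10);
}
---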


Re: C++ / Why Iterators Got It All Wrong

2017-09-03 Thread Moritz Maxeiner via Digitalmars-d
On Sunday, 3 September 2017 at 09:24:03 UTC, Ilya Yaroshenko 
wrote:
On Sunday, 3 September 2017 at 02:43:51 UTC, Moritz Maxeiner 
wrote:
On Sunday, 3 September 2017 at 02:08:20 UTC, Ilya Yaroshenko 
wrote:
On Tuesday, 29 August 2017 at 12:50:08 UTC, Robert M. Münch 
wrote:
Maybe of interest: 
https://www.think-cell.com/en/career/talks/iterators/#1


I haven't read everything, so not sure if it is worth taking a 
look.


Iterators have no proper alternative when one needs to 
implement a generic tensor library like Mir Algorithm [1].


Out of curiosity: Could you elaborate on what the issues are 
with using a range based API internally (if it's performance, 
the "why")?


There are three general kinds of dynamic dense tensors. Mir 
implements all of them:


1. Contiguous tensors. Their data is located contiguously in 
memory. Single dense memory chunk. All strides between 
sub-tensors can be computed from the lengths.


2. Canonical tensors. Only the data for one dimension is dense; 
other dimensions have strides that cannot be computed from the 
lengths. BLAS matrices are canonical tensors: they have two 
lengths and one stride.


3. Universal tensors. Each dimension has a stride. Numpy 
ndarrays are universal tensors.


Finite random access range Issues:

1. The main API issue is that a full-featured random access 
range (RAR) API (with slicing and all opIndex* operators like 
`[]*=`) is much larger compared with an unbounded random access 
iterator (the same API as a pointer).


Won't you have to implement opIndex* operators either way in 
order to use the `a[]` syntax? The main difference I can see 
should be that you'd also have to implement the InputRange 
(front,popFront,empty), ForwardRange (save), and 
BidirectionalRange (back,popBack) members, which if you don't 
need them, is indeed understandably too much?




2. A random access range holds its length. Yes, Phobos has 
infinite ranges (Mir hasn't), but in practice almost any RAR 
constructed using Phobos has a length, and you cannot strip it 
once the type is constructed. Anyway, an infinite RAR is just a 
pointer/iterator that can move forward but cannot move backward. 
Tensors hold their own lengths for each dimension, so the 
range's length is just a useless payload.


I'm not sure what you mean here. Is this still about accessing 
the elements of one tensor? If so, what do Phobos' ranges have to 
do with your tensor type's API?




3. Ranges cannot move backward (auto b = a[-4 .. $];). This 
means one cannot implement a dynamic `reversed` (flip) [1, 2] 
operation for dimensions that have strides (universal or 
canonical tensors). _Dynamic_ means that instead of constructing 
a new type with reversed order for a dimension, we just negate 
the corresponding stride and move the cursor.


Right, that is indeed a limitation of the range abstraction 
(though a type could expose both functionality to move backwards 
_and_ the range API) and if you need that, it makes sense not to 
use ranges.
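
A minimal sketch of that dynamic flip for a single strided 
dimension (an illustration only, not Mir's implementation; it 
assumes a non-empty view):

---
struct StridedView(T)
{
    T* cursor;
    size_t length;
    ptrdiff_t stride; // in elements

    ref T opIndex(size_t i) { return cursor[cast(ptrdiff_t) i * stride]; }

    // Reverse the iteration order in place: move the cursor to the
    // last element and negate the stride; no new type is constructed.
    void flip()
    {
        cursor += cast(ptrdiff_t)(length - 1) * stride;
        stride = -stride;
    }
}

void main()
{
    auto data = [1, 2, 3, 4];
    auto v = StridedView!int(data.ptr, data.length, 1);
    v.flip();
    assert(v[0] == 4 && v[3] == 1);
}
---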


Thanks for the summary of issues :)



Re: Bug in D!!!

2017-09-03 Thread Moritz Maxeiner via Digitalmars-d-learn
On Sunday, 3 September 2017 at 04:18:03 UTC, EntangledQuanta 
wrote:
On Sunday, 3 September 2017 at 02:39:19 UTC, Moritz Maxeiner 
wrote:
On Saturday, 2 September 2017 at 23:12:35 UTC, EntangledQuanta 
wrote:

[...]


The contexts being independent of each other doesn't change 
that we would still be overloading the same keyword with three 
vastly different meanings. Two is already bad enough imho (and 
if I had a good idea with what to replace the "in" for AA's 
I'd propose removing that meaning).


Why? Don't you realize that the contexts matters and [...]


Because instead of seeing the keyword and knowing its one meaning 
you also have to consider the context it appears in. That is 
intrinsically more work (though the difference may be very small) 
and thus harder.




Again, I'm not necessarily arguing for them, just saying that 
one shouldn't avoid them just to avoid them.






[...]


It's not about ambiguity for me, it's about readability. The 
more significantly different meanings you overload some 
keyword - or symbol, for that matter - with, the harder it 
becomes to read.


I don't think that is true. Everything is hard to read. It's 
about experience. The more you experience something, the clearer 
it becomes. Only with true ambiguity is something impossible. I 
realize that one can design a language to be hard to parse due 
to apparent ambiguities, but I am talking about cases where they 
can be resolved immediately (at most a few milliseconds).


Experience helps, of course, but it doesn't change that it's 
still just that little bit slower. And every such overloading we 
encourage encourages more, which in the end sums up.




You are making general statements, and it is not that I 
disagree, but it depends on context(everything does). In this 
specific case, I think it is extremely clear what in means, so 
it is effectively like using a different token. Again, everyone 
is different though and have different experiences that help 
them parse things more naturally. I'm sure there are things 
that you might find easy that I would find hard. But that 
shouldn't stop me from learning about them. It makes me 
"smarter", to simplify the discussion.


I am, because I believe it to be generally true for "1 keyword 
|-> 1 meaning" to be easier to read than "1 keyword and 1 context 
|-> 1 meaning" as the former inherently takes less time.






[...]


Well, yes, as I wrote, I think it is unambiguous (and can thus 
be used), I just think it shouldn't be used.


Yes, but you have only given the reason that it shouldn't be 
used because you believe that one shouldn't overload keywords 
because it makes it harder to parse the meaning. My rebuttal, 
as I have said, is that it is not harder, so your argument is 
not valid. All you could do is claim that it is hard and we 
would have to find out who is more right.


As I countered that in the above, I don't think your rebuttal is 
valid.




I have a logical argument against your absolute restriction 
though... in that it causes one to have to use more symbols. I 
would imagine you are against stuff like using "in1", "in2", 
etc. because they visibly are too close to each other.


It's not an absolute restriction, it's an absolute position from 
which I argue against including such overloading on principle.
If it can be overcome by demonstrating that it can't sensibly be 
done without more overloading and that it adds enough value to be 
worth the increased overloading, I'd be fine with inclusion.





[...]


I would much rather see it as a generalization of existing 
template specialization syntax [1], which this is t.b.h. just 
a superset of (current syntax allows limiting to exactly one, 
you propose limiting to 'n'):


---
foo(T: char)  // Existing syntax: Limit T to the single 
type `char`
foo(T: (A, B, C)) // New syntax:  Limit T to one of A, B, 
or C

---


Yes, if this worked, I'd be fine with it. Again, I could care 
less. `:` == `in` for me as long as `:` has the correct meaning 
of "can be one of the following" or whatever.


But AFAIK, : is not "can be one of the following"(which is "in" 
or "element of" in the mathematical sense) but can also mean 
"is a derived type of".


Right, ":" is indeed an overloaded symbol in D (and ironically, 
instead of with "in", I think all its meanings are valuable 
enough to be worth the cost). I don't see how that would 
interfere in this context, though, as we don't actually overload 
a new meaning (it's still "restrict this type to the thing to the 
right").








If that is the case then go for it ;) It is not a concern of 
mine. You tell me the syntax and I will use it. (I'd have no 
choice, of course, but if it's short and sweet then I won't 
have any problem).


I'm discussing this as a matter of theory, I don't have a use for 
it.





[...]


Quoting a certain person (you know who you are) from DConf 
2017: "Write a DIP".
I'm quite happy to discuss this idea, but at the end of 

Re: C `restrict` keyword in D

2017-09-02 Thread Moritz Maxeiner via Digitalmars-d

On Sunday, 3 September 2017 at 03:04:58 UTC, Uknown wrote:

[...]

void foo(ref RCArray!int arr, ref int val) @safe
{
    {
        auto copy = arr;       // arr's (and copy's) reference counts are both 2
        arr = RCArray!int([]); // There is another owner, so arr
                               // forgets about the old payload
    } // Last owner of the array ('copy') gets destroyed and happily
      // frees the payload.
    val = 3; // Oops.
}

Here, adding `restrict` to foo's parameters like so:

void foo(restrict ref RCArray!int arr, restrict ref int val)

would make the compiler statically enforce that the two 
references are not pointing to the same data. This would cause 
an error in main, since arr[0] is from the same block of memory 
as arr.


How does the compiler know which member of RCArray!int to check 
for pointing to the same memory chunk as val?


Re: C++ / Why Iterators Got It All Wrong

2017-09-02 Thread Moritz Maxeiner via Digitalmars-d
On Sunday, 3 September 2017 at 02:08:20 UTC, Ilya Yaroshenko 
wrote:
On Tuesday, 29 August 2017 at 12:50:08 UTC, Robert M. Münch 
wrote:
Maybe of interest: 
https://www.think-cell.com/en/career/talks/iterators/#1


I haven't read everything, so not sure if it is worth taking a 
look.


Iterators have no proper alternative when one needs to implement 
a generic tensor library like Mir Algorithm [1].


Out of curiosity: Could you elaborate on what the issues are with 
using a range based API internally (if it's performance, the 
"why")?


Re: Bug in D!!!

2017-09-02 Thread Moritz Maxeiner via Digitalmars-d-learn
On Saturday, 2 September 2017 at 23:12:35 UTC, EntangledQuanta 
wrote:
On Saturday, 2 September 2017 at 21:19:31 UTC, Moritz Maxeiner 
wrote:
On Saturday, 2 September 2017 at 00:00:43 UTC, EntangledQuanta 
wrote:
On Friday, 1 September 2017 at 23:25:04 UTC, Jesse Phillips 
wrote:
I've loved being able to inherit and override generic 
functions in C#. Unfortunately C# doesn't use templates and 
I hit so many other issues where Generics just suck.


I don't think it is appropriate to dismiss the need for the 
compiler to generate a virtual function for every 
instantiated T, after all, the compiler can't know you have 
a finite known set of T unless you tell it.


But lets assume we've told the compiler that it is compiling 
all the source code and it does not need to compile for 
future linking.


First the compiler will need to make sure all virtual 
functions can be generated for the derived classes. In this 
case the compiler must note the template function and 
validate all derived classes include it. That was easy.


Next up each instantiation of the function needs a new 
v-table entry in all derived classes. Current compiler 
implementation will compile each module independently of 
each other; so this feature could be specified to work 
within the same module or new semantics can be written up of 
how the compiler modifies already compiled modules and those 
which reference the compiled modules (the object sizes would 
be changing due to the v-table modifications)


With those three simple changes to the language I think that 
this feature will work for every T.


Specifying that there will be no further linkage is the same 
as making T finite. T must be finite.


C# uses generics/IR/CLR so it can do things at run time that 
is effectively compile time for D.


By simply extending the grammar slightly in an intuitive way, 
we can get the explicit finite case, which is easy:


foo(T in [A,B,C])()

and possibly for your case

foo(T in )() would work

or

foo(T in )()

the `in` keyword makes sense here and is not used nor 
ambiguous, I believe.


While I agree that `in` does make sense for the semantics 
involved, it is already used to do a failable key lookup 
(return pointer to value or null if not present) into an 
associative array [1] and input contracts. It wouldn't be 
ambiguous AFAICT, but having a keyword mean three different 
things depending on context would make the language even more 
complex (to read).


Yes, but they are independent, are they not? Maybe not.

foo(T in Typelist)()

`in`, as used here, is not an input contract and is completely 
independent. I suppose for arrays it could be ambiguous.


The contexts being independent of each other doesn't change that 
we would still be overloading the same keyword with three vastly 
different meanings. Two is already bad enough imho (and if I had 
a good idea with what to replace the "in" for AA's I'd propose 
removing that meaning).




For me, and this is just me, I do not find it ambiguous. I 
don't find different meanings ambiguous unless the context 
overlaps. Perceived ambiguity is not ambiguity, it's just 
ignorance... which can be overcome through learning. Hell, D 
has many cases where there are perceived ambiguities... as do 
most things.


It's not about ambiguity for me, it's about readability. The more 
significantly different meanings you overload some keyword - or 
symbol, for that matter - with, the harder it becomes to read.




But in any case, I could care less about the exact syntax. It's 
just a suggestion that makes the most logical sense with regard 
to the standard usage of in. If it is truly unambiguous then it 
can be used.


Well, yes, as I wrote, I think it is unambiguous (and can thus be 
used), I just think it shouldn't be used.




Another alternative is

foo(T of Typelist)

which, AFAIK, of is not used in D and even most programming 
languages. Another could be


foo(T -> Typelist)

or even

foo(T from Typelist)


I would much rather see it as a generalization of existing 
template specialization syntax [1], which this is t.b.h. just a 
superset of (current syntax allows limiting to exactly one, you 
propose limiting to 'n'):


---
foo(T: char)  // Existing syntax: Limit T to the single type 
`char`

foo(T: (A, B, C)) // New syntax:  Limit T to one of A, B, or C
---

Strictly speaking, this is exactly what template specialization 
is for, it's just that the current one only supports a single 
type instead of a set of types.
Looking at the grammar rules, upgrading it like this is a fairly 
small change, so the cost there should be minimal.
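
For comparison, the restriction itself is already expressible 
today as a template constraint, which is roughly what such a 
generalized specialization would lower to (sketch with made-up 
names):

---
import std.meta : staticIndexOf;

enum bool isOneOf(T, Types...) = staticIndexOf!(T, Types) >= 0;

// Stands in for the proposed `foo(T: (char, wchar, dchar))`.
void foo(T)() if (isOneOf!(T, char, wchar, dchar))
{
    // ...
}

void main()
{
    foo!char();    // ok
    foo!dchar();   // ok
    // foo!int();  // would be rejected by the constraint
}
---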




or whatever. Doesn't really matter. They all mean the same to 
me once the definition has been written in stone. Could use 
`foo(T eifjasldj Typelist)` for all I care.


That's okay, but it does matter to me.

The important thing for me is that such a simple syntax exists 
rather than the "complex syntaxes" that have already been 
given (which are ultimately syntaxes as 

Re: nested module problem

2017-09-02 Thread Moritz Maxeiner via Digitalmars-d-learn
On Saturday, 2 September 2017 at 23:02:18 UTC, Moritz Maxeiner 
wrote:
On Saturday, 2 September 2017 at 21:56:15 UTC, Jean-Louis Leroy 
wrote:

[...]

Hmmm I see...I was thinking of spinning the runtime part of my 
openmethods library into its own module (like here 
https://github.com/jll63/openmethods.d/tree/split-runtime/source/openmethods) but it looks like a bad idea...


Why does it look like a bad idea (I don't see an immediate 
issue the module structure either way)?


* in the module structure


Re: nested module problem

2017-09-02 Thread Moritz Maxeiner via Digitalmars-d-learn
On Saturday, 2 September 2017 at 21:56:15 UTC, Jean-Louis Leroy 
wrote:

[...]

Hmmm I see...I was thinking of spinning the runtime part of my 
openmethods library into its own module (like here 
https://github.com/jll63/openmethods.d/tree/split-runtime/source/openmethods) but it looks like a bad idea...


Why does it look like a bad idea (I don't see an immediate issue 
the module structure either way)?


Re: nested module problem

2017-09-02 Thread Moritz Maxeiner via Digitalmars-d-learn
On Saturday, 2 September 2017 at 21:24:19 UTC, Jean-Louis Leroy 
wrote:
On Saturday, 2 September 2017 at 20:48:22 UTC, Moritz Maxeiner 
wrote:
So the compiler wants you to import it by the name it has 
inferred for you (The fix being either specifying the module 
name in foo/bar.d as `module foo.bar`, or importing it via 
`import bar;` in foo.d).

[1] https://dlang.org/spec/module.html


I thought of doing that, it merely changed the error. OK now I 
have:


in foo.d:
module foo;
import foo.bar;

in foo/bar.d:
module foo.bar;

$ dmd -c foo.d foo/bar.d
foo/bar.d(1): Error: package name 'foo' conflicts with usage as 
a module name in file foo.d


If I compile separately:
jll@ORAC:~/dev/d/tests/modules$ dmd -I. -c foo.d
foo/bar.d(1): Error: package name 'foo' conflicts with usage as 
a module name in file foo.d


Yes, these now both fail because you cannot have a module `foo` 
and a package `foo` at the same time (they share a namespace), I 
forgot about that.



jll@ORAC:~/dev/d/tests/modules$ dmd -I. -c foo/bar.d


(same as before, no issue here)



It believes that 'foo' is a package...because there is a 'foo' 
directory?


You created the 'foo' package by specifying `module foo.bar` in 
foo/bar.d.




I see that a workaround is to move foo.d to foo/package.d but I 
would like to avoid that.


AFAIK you can't; consider:

--- baz.d ---
import foo;


in the same directory as foo.d. If foo/package.d exists (with 
`module foo` inside), what should baz.d import? foo.d or 
foo/package.d?
The point being that we could have either used foo/package.d or 
foo.d for a package file, but not both (as that would allow 
ambiguity) and package.d was chosen.
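
For illustration, a layout that does resolve cleanly with a 
package file (names as used in this thread):

.
├── baz.d           // import foo;  -> picks up foo/package.d
└── foo
    ├── bar.d       // module foo.bar;
    └── package.d   // module foo;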


[1] https://dlang.org/spec/module.html#package-module


Re: C++ / Why Iterators Got It All Wrong

2017-09-02 Thread Moritz Maxeiner via Digitalmars-d
On Saturday, 2 September 2017 at 20:17:40 UTC, Robert M. Münch 
wrote:

On 2017-08-31 07:13:55 +, drug said:

Interesting. How is it comparable with iterators and ranges 
ideas?


Well, see: 
http://www.rebol.com/docs/core23/rebolcore-6.html#section-6


Thanks for your post about Rebol, I didn't know it before.
After reading through the series chaper, though, AFAICT Rebol 
series *are* iterators (begin+end), just with really nice, 
functional (read: LISP) syntax?


Re: Bug in D!!!

2017-09-02 Thread Moritz Maxeiner via Digitalmars-d-learn
On Saturday, 2 September 2017 at 00:00:43 UTC, EntangledQuanta 
wrote:
On Friday, 1 September 2017 at 23:25:04 UTC, Jesse Phillips 
wrote:
I've loved being able to inherit and override generic functions 
in C#. Unfortunately C# doesn't use templates and I hit so 
many other issues where Generics just suck.


I don't think it is appropriate to dismiss the need for the 
compiler to generate a virtual function for every instantiated 
T, after all, the compiler can't know you have a finite known 
set of T unless you tell it.


But lets assume we've told the compiler that it is compiling 
all the source code and it does not need to compile for future 
linking.


First the compiler will need to make sure all virtual 
functions can be generated for the derived classes. In this 
case the compiler must note the template function and validate 
all derived classes include it. That was easy.


Next up each instantiation of the function needs a new v-table 
entry in all derived classes. Current compiler implementation 
will compile each module independently of each other; so this 
feature could be specified to work within the same module or 
new semantics can be written up of how the compiler modifies 
already compiled modules and those which reference the 
compiled modules (the object sizes would be changing due to 
the v-table modifications)


With those three simple changes to the language I think that 
this feature will work for every T.


Specifying that there will be no further linkage is the same as 
making T finite. T must be finite.


C# uses generics/IR/CLR so it can do things at run time that is 
effectively compile time for D.


By simply extending the grammar slightly in an intuitive way, 
we can get the explicit finite case, which is easy:


foo(T in [A,B,C])()

and possibly for your case

foo(T in )() would work

or

foo(T in )()

the `in` keyword makes sense here and is not used nor 
ambiguous, I believe.


While I agree that `in` does make sense for the semantics 
involved, it is already used to do a failable key lookup (return 
pointer to value or null if not present) into an associative 
array [1] and input contracts. It wouldn't be ambiguous AFAICT, 
but having a keyword mean three different things depending on 
context would make the language even more complex (to read).


W.r.t. the idea in general: I think something like that could 
be valuable to have in the language, but since this essentially 
amounts to syntactic sugar (AFAICT), I'm not (yet) convinced 
that with `static foreach` being included it's worth the cost.


[1] https://dlang.org/spec/expression.html#InExpression
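
For the explicitly finite case, `static foreach` can already 
generate one ordinary virtual overload per listed type; a rough 
sketch (class names and type list made up):

---
import std.meta : AliasSeq;

alias Supported = AliasSeq!(int, short, double);

class Base
{
    // One regular virtual overload per supported type.
    static foreach (T; Supported)
    {
        void go(T value) { /* default behaviour */ }
    }
}

class Derived : Base
{
    static foreach (T; Supported)
    {
        override void go(T value) { /* specialised behaviour */ }
    }
}
---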


Re: nested module problem

2017-09-02 Thread Moritz Maxeiner via Digitalmars-d-learn
On Saturday, 2 September 2017 at 20:03:48 UTC, Jean-Louis Leroy 
wrote:

So I have:

jll@ORAC:~/dev/d/tests/modules$ tree
.
├── foo
│   └── bar.d
└── foo.d

foo.d contains:
import foo.bar;

bar.d is empty.


This means bar.d's module name will be inferred by the compiler 
[1], which will ignore the path you put it under, yielding the 
module name "bar", not "foo.bar" (one of the issues of doing 
otherwise would be how the compiler should know at which path 
depth the inference should start - and any solution to that other 
than simply ignoring the path would be full of special cases):


Modules have a one-to-one correspondence with source files. 
The module name is, by default, the file name with the path 
and extension stripped off, and can be set explicitly with the 
module declaration.




Now I try compiling:
jll@ORAC:~/dev/d/tests/modules$ dmd -c foo.d


This looks like a compiler bug to me (accepts invalid), though 
I'm not certain.



jll@ORAC:~/dev/d/tests/modules$ dmd -c foo/bar.d


(No issue here, just an empty module being compiled separately)



So far so good. Now I try it the way dub does it:
jll@ORAC:~/dev/d/tests/modules$ dmd -c foo.d foo/bar.d
foo.d(1): Error: module bar from file foo/bar.d must be 
imported with 'import bar;'


What's up?


This doesn't work, because of the inferred module name for 
foo/bar.d being "bar".
So the compiler wants you to import it by the name it has 
inferred for you (The fix being either specifying the module name 
in foo/bar.d as `module foo.bar`, or importing it via `import 
bar;` in foo.d).

[1] https://dlang.org/spec/module.html


Re: string to character code hex string

2017-09-02 Thread Moritz Maxeiner via Digitalmars-d-learn

On Saturday, 2 September 2017 at 20:02:37 UTC, bitwise wrote:
On Saturday, 2 September 2017 at 18:28:02 UTC, Moritz Maxeiner 
wrote:


In UTF8:

--- utfmangle.d ---
void fun_ༀ() {}
pragma(msg, fun_ༀ.mangleof);
---

---
$ dmd -c utfmangle.d
_D6mangle7fun_ༀFZv
---

Only universal character names for identifiers are allowed, 
though, as per [1]


[1] https://dlang.org/spec/lex.html#identifiers


What I intend to do is this though:

void fun(string s)() {}
pragma(msg, fun!"ༀ".mangleof);

which gives:
_D7mainMod21__T3funVAyaa3_e0bc80Z3funFNaNbNiNfZv

where "e0bc80" is the 3 bytes of "ༀ".


Interesting, I wasn't aware of that (though after thinking about 
it, it does make sense, as identifiers can only have visible 
characters in them, while a string could have things such as 
control characters inside), thanks! That behaviour is defined 
here [1], btw (the line `CharWidth Number _ HexDigits`).


[1] https://dlang.org/spec/abi.html#Value
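
As an aside, druntime's demangler can be used to check such 
manglings programmatically (exact output formatting may differ 
between compiler versions):

---
void main()
{
    import core.demangle : demangle;
    import std.stdio : writeln;

    // The mangled name from above.
    writeln(demangle("_D7mainMod21__T3funVAyaa3_e0bc80Z3funFNaNbNiNfZv"));
}
---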


Re: Using closure causes GC allocation

2017-09-02 Thread Moritz Maxeiner via Digitalmars-d-learn

On Saturday, 2 September 2017 at 18:59:30 UTC, Vino.B wrote:
On Saturday, 2 September 2017 at 18:32:55 UTC, Moritz Maxeiner 
wrote:

On Saturday, 2 September 2017 at 18:08:19 UTC, vino.b wrote:
On Saturday, 2 September 2017 at 18:02:06 UTC, Moritz 
Maxeiner wrote:

On Saturday, 2 September 2017 at 17:43:08 UTC, Vino.B wrote:

[...]


Line 25 happens because of `[a.name]`. You request a new 
array: the memory for this has to be allocated (the reason 
why the compiler says "may" is because sometimes, e.g. if 
the array literal itself contains only literals, the 
allocations needn't happen at runtime and no GC call is 
necessary). Since you don't actually use the array, get rid 
of it:


[...]


Hi,

  Thank you for your help and the DMD version that i am using 
is DMD 2.076.0 and yes I am on windows.


Please post a compilable, minimal example including how that 
function gets called that yields you that compiler output.


Hi,

 Please find the example code below,

[...]


Cannot reproduce under Linux with dmd 2.076.0 (with commented out 
Windows-only check). I'll try to see what happens on Windows once 
I have a VM setup.




Another similar issue:
 I removed the [a.name] and the issue in line 25 is resolved, 
but for another function I am getting the same error


string[][] cleanFiles(string FFs, string Step) {
    auto dFiles = dirEntries(FFs, SpanMode.shallow)
        .filter!(a => a.isFile)
        .map!(a => [a.name, a.timeCreated.toSimpleString[0 .. 20]])
        .array;                      // -> Issue in this line

    if (Step == "run")
        dFiles.each!(a => a[0].remove);
    return dFiles;
}

If I replace the line in error as below then I am getting the 
error "Error: cannot implicitly convert expression dFiles of 
type Tuple!(string, string)[] to string[][]"


auto dFiles = dirEntries(FFs, SpanMode.shallow)
    .filter!(a => a.isFile)
    .map!(a => tuple(a.name, a.timeCreated.toSimpleString[0 .. 20]))
    .array;


You changed the type of dFiles, which you return from cleanFiles, 
without changing the return type of cleanFiles. Change the return 
type of cleanFiles to the type the compiler error above tells you 
it should be (`Tuple!(string, string)[]` instead of 
`string[][]`), or let the compiler infer it via auto (`auto 
cleanFiles(...`).
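
I.e., roughly this (untested sketch; it keeps the Windows-only 
`timeCreated` from your code):

---
import std.typecons : Tuple, tuple;

Tuple!(string, string)[] cleanFiles(string FFs, string Step)
{
    import std.algorithm : each, filter, map;
    import std.array : array;
    import std.file : SpanMode, dirEntries, remove;

    auto dFiles = dirEntries(FFs, SpanMode.shallow)
        .filter!(a => a.isFile)
        .map!(a => tuple(a.name, a.timeCreated.toSimpleString[0 .. 20]))
        .array;

    if (Step == "run")
        dFiles.each!(a => a[0].remove);
    return dFiles;
}
---

Replacing the explicit return type with `auto` works just as 
well and avoids spelling out the Tuple type.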


Re: Using closure causes GC allocation

2017-09-02 Thread Moritz Maxeiner via Digitalmars-d-learn

On Saturday, 2 September 2017 at 18:08:19 UTC, vino.b wrote:
On Saturday, 2 September 2017 at 18:02:06 UTC, Moritz Maxeiner 
wrote:

On Saturday, 2 September 2017 at 17:43:08 UTC, Vino.B wrote:

[...]


Line 25 happens because of `[a.name]`. You request a new 
array: the memory for this has to be allocated (the reason why 
the compiler says "may" is because sometimes, e.g. if the 
array literal itself contains only literals, the allocations 
needn't happen at runtime and no GC call is necessary). Since 
you don't actually use the array, get rid of it:


[...]


Hi,

  Thank you for your help and the DMD version that i am using 
is DMD 2.076.0 and yes I am on windows.


Please post a compilable, minimal example including how that 
function gets called that yields you that compiler output.


Re: string to character code hex string

2017-09-02 Thread Moritz Maxeiner via Digitalmars-d-learn

On Saturday, 2 September 2017 at 18:07:51 UTC, bitwise wrote:
On Saturday, 2 September 2017 at 17:45:30 UTC, Moritz Maxeiner 
wrote:


If this (unnecessary waste) is of concern to you (and from the 
fact that you used ret.reserve I assume it is), then the easy 
fix is to use `sformat` instead of `format`:




Yes, thanks. I'm going to go with a variation of your approach:

private
string toAsciiHex(string str)
{
import std.ascii : lowerHexDigits;
import std.exception: assumeUnique;

auto ret = new char[str.length * 2];
int i = 0;

foreach(c; str) {
ret[i++] = lowerHexDigits[(c >> 4) & 0xF];
ret[i++] = lowerHexDigits[c & 0xF];
}

return ret.assumeUnique;
}


If you never need the individual character function, that's 
probably the best in terms of readability, though with a decent 
compiler, that and the two-function one should result in the 
same opcodes (except for the swapped bit shift).




I'm not sure how the compiler would mangle UTF8, but I intend 
to use this on one specific function (actually the 100's of 
instantiations of it).


In UTF8:

--- utfmangle.d ---
void fun_ༀ() {}
pragma(msg, fun_ༀ.mangleof);
---

---
$ dmd -c utfmangle.d
_D6mangle7fun_ༀFZv
---

Only universal character names for identifiers are allowed, 
though, as per [1]


[1] https://dlang.org/spec/lex.html#identifiers



Re: Using closure causes GC allocation

2017-09-02 Thread Moritz Maxeiner via Digitalmars-d-learn

On Saturday, 2 September 2017 at 17:43:08 UTC, Vino.B wrote:

Hi All,

   Requesting your help on how to solve the issue in the below 
code; when I execute the program with -vgc it states as below:


NewTD.d(21): vgc: using closure causes GC allocation
NewTD.d(25): vgc: array literal may cause GC allocation

void logClean (string[] Lglst, int LogAge) {   // Line 21
    if (!Lglst[0].exists) { mkdir(Lglst[0]); }
    auto ct1 = Clock.currTime();
    auto st1 = ct1 + days(-LogAge);
    auto dFiles = dirEntries(Lglst[0], SpanMode.shallow)
        .filter!(a => a.exists && a.isFile && a.timeCreated < st1)
        .map!(a => [a.name])
        .array;   // Line 25

    dFiles.each!(f => f[0].remove);
}


Line 25 happens because of `[a.name]`. You request a new array: 
the memory for this has to be allocated (the reason why the 
compiler says "may" is because sometimes, e.g. if the array 
literal itself contains only literals, the allocations needn't 
happen at runtime and no GC call is necessary). Since you don't 
actually use the array, get rid of it:


---
void logClean (string[] Lglst, int LogAge) {   // Line 21
    if (!Lglst[0].exists) { mkdir(Lglst[0]); }
    auto ct1 = Clock.currTime();
    auto st1 = ct1 + days(-LogAge);
    auto dFiles = dirEntries(Lglst[0], SpanMode.shallow)
        .filter!(a => a.exists && a.isFile && a.timeCreated < st1)
        .array;   // Line 25

    dFiles.each!(f => f.remove);
}
---

I cannot reproduce the line 21 report, though.
Since you use `timeCreated` I assume you're on Windows, but 
what's your D compiler, which D frontend version are you using, 
etc. (all the things needed to attempt to reproduce the error).


Re: string to character code hex string

2017-09-02 Thread Moritz Maxeiner via Digitalmars-d-learn

On Saturday, 2 September 2017 at 16:23:57 UTC, bitwise wrote:

On Saturday, 2 September 2017 at 15:53:25 UTC, bitwise wrote:

[...]


This seems to work well enough.

string toAsciiHex(string str)
{
import std.array : appender;

auto ret = appender!string(null);
ret.reserve(str.length * 2);
foreach(c; str) ret.put(format!"%x"(c));
return ret.data;
}


Note: Each of those format calls is going to allocate a new 
string, followed by put copying that new string's content over 
into the appender, leaving you with Θ(str.length) tiny 
memory chunks that aren't used anymore for the GC to eventually 
collect.


If this (unnecessary waste) is of concern to you (and from the 
fact that you used ret.reserve I assume it is), then the easy fix 
is to use `sformat` instead of `format`:


---
string toHex(string str)
{
import std.format : sformat;
import std.exception: assumeUnique;

auto   ret = new char[str.length * 2];
size_t len;

foreach (c; str)
{
auto slice = sformat!"%x"(ret[len..$], c);
//auto slice = toHex(ret[len..$], c);
assert (slice.length <= 2);
len += slice.length;
}

return ret[0..len].assumeUnique;
}
---

If you want to cut out the format import entirely, notice the 
`auto slice = toHex...` line, which can be implemented like this 
(always returns two chars):


---
char[] toHex(char[] buf, char c)
{
import std.ascii : lowerHexDigits;

assert (buf.length >= 2);
buf[0] = lowerHexDigits[(c & 0xF0) >> 4];
buf[1] = lowerHexDigits[c & 0x0F];

return buf[0..2];
}
---


Re: gcd with doubles

2017-09-01 Thread Moritz Maxeiner via Digitalmars-d-learn

On Friday, 1 September 2017 at 09:33:08 UTC, Alex wrote:
On Sunday, 27 August 2017 at 23:13:24 UTC, Moritz Maxeiner 
wrote:

On Sunday, 27 August 2017 at 19:47:59 UTC, Alex wrote:

[...]


To expand on the earlier workaround: You can also adapt a 
floating point to string algorithm in order to dynamically 
determine an upper bound on the number of after decimal point 
digits required. Below is an untested adaption of the 
reference C implementation of errol0[1] for that purpose (MIT 
license as that is what the original code is under):


[...]


Hey, cool!
Thanks for the efforts :)


No problem, two corrections to myself, though:
1) It's a lower bound, not an upper bound (you need at least that 
many digits in order to not lose precision)
2) The code is missing `_ > ulong.min` checks along the existing 
`_ < ulong.max` checks


Re: Editor recommendations for new users.

2017-08-31 Thread Moritz Maxeiner via Digitalmars-d

On Thursday, 31 August 2017 at 23:20:52 UTC, Jerry wrote:
On Wednesday, 30 August 2017 at 22:42:40 UTC, Moritz Maxeiner 
wrote:

On Wednesday, 30 August 2017 at 21:30:44 UTC, Jerry wrote:
The install requirement is arbitrary, and why 20MB? It just 
seems like you are trying to advertise that program for some 
reason.


Because of the programs recommended until that post nothing 
was below that while meeting the other requirements (there 
were others in the same range, vim being one). The (later) 
DlangIDE recommendation, however, lowered that to about ~5MB 
(beating both my recommendation and vim in the process).


It's one of the most useless requirements in that list though.


That depends on OP's use case.

The only reason people mention install size is to boast about 
it.


I disagree.

I think he just didn't want to install something like Visual 
Studio which takes 10+ GB.


I don't know and don't want to speculate. My personal implicit 
assumption is only that as this is the general NG, not the learn 
NG, that OP has good reasons as to why that's a requirement (on 
the learn NG I would've asked for the reasons first before 
recommending anything myself, though that's beside the point).




It is relevant, shit, even with a shitty laptop you can 
upgrade the hdd and then it becomes a non-issue anyways.


Your argument implicitly assumed a specific reason (albeit a 
generally sensible one) as to why low install size was a 
(must) requirement (physical storage limitations being only 
one possible reason; shared devices with fixed disk quotas or 
devices owned by the university with certain policies being 
other possibilities). That is why I didn't (and don't) think 
it as relevant to the specific point about being as low as 
possible I was making.


Fancy way of agreeing with me, not sure what you are even going 
on about anymore if you agree.


I provided an explanation why I dismissed your argument as 
irrelevant to the point I was making. That does not mean I agree 
with you.


Re: Symbols missing, unmangle!

2017-08-31 Thread Moritz Maxeiner via Digitalmars-d

On Thursday, 31 August 2017 at 14:51:59 UTC, Mike Wey wrote:

On 30-08-17 23:51, Moritz Maxeiner wrote:
2) Try to get demangling of D symbols into upstream of the 
currently common linkers (GNU linker, gold, lld, etc.)


The GNU linker and gold support demangling D symbols, so if you 
are on linux try adding `-L--demangle=dlang` to the dmd 
commandline.


Neat, thanks!


Re: Output range with custom string type

2017-08-31 Thread Moritz Maxeiner via Digitalmars-d-learn

On Thursday, 31 August 2017 at 07:06:26 UTC, Jacob Carlborg wrote:

On 2017-08-29 19:35, Moritz Maxeiner wrote:


 void put(T t)
 {
     if (!store)
     {
         // Allocate only once for "small" vectors
         store = alloc.makeArray!T(8);
         if (!store) onOutOfMemoryError();
     }
     else if (length == store.length)
     {
         // Growth factor of 1.5
         auto expanded = alloc.expandArray!T(store, store.length / 2);
         if (!expanded) onOutOfMemoryError();
     }
     assert (length < store.length);
     moveEmplace(t, store[length++]);
 }


What's the reason to use "moveEmplace" instead of just 
assigning to the array: "store[length++] = t" ?


The `move` part is to support non-copyable types (i.e. T with 
`@disable this(this)`), such as another owning container 
(assigning would generally try to create a copy).
The `emplace` part is because the destination `store[length]` has 
been default initialized either by makeArray or expandArray and 
it doesn't need to be destroyed (a pure move would destroy 
`store[length]` if T has a destructor).
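
For illustration, the kind of element type this matters for 
(made-up handle type; `moveEmplace` lives in 
std.algorithm.mutation):

---
import std.algorithm.mutation : moveEmplace;

struct UniqueHandle
{
    int fd = -1;
    @disable this(this); // non-copyable
    ~this() { /* would close fd here */ }
}

void main()
{
    auto a = UniqueHandle(3);
    UniqueHandle b = void;  // raw, not yet constructed storage
    moveEmplace(a, b);      // no copy, no destructor run on b's old garbage
    assert(b.fd == 3 && a.fd == -1); // source was reset to .init
}
---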


Re: Editor recommendations for new users.

2017-08-30 Thread Moritz Maxeiner via Digitalmars-d

On Wednesday, 30 August 2017 at 21:30:44 UTC, Jerry wrote:
On Sunday, 27 August 2017 at 18:08:52 UTC, Moritz Maxeiner 
wrote:
The requirements are rather vague, you can interpret it in a 
number of ways.


The sensible interpretation imho is "as low an install 
footprint as possible while still fulfilling the other 
requirements". I'm not aware of anything below ~20MB install 
footprint that fulfills the other requirements, but I'd be 
interested if you know any.


The install requirement is arbitrary, and why 20MB? It just 
seems like you are trying to advertise that program for some 
reason.


Because, of the programs recommended up to that post, nothing was 
below that while meeting the other requirements (there were 
others in the same range, vim being one). The (later) DlangIDE 
recommendation, however, lowered that to about ~5MB (beating both 
my recommendation and vim in the process).




I wouldn't consider 200MB gigantic in comparison to 20MB 
cause there is literally no difference of use for me.


The thread is about OP's requirements.


So replace me with anyone.

You'd have to have a really shitty laptop for it to be an 
issue.


Not relevant.


It is relevant, shit, even with a shitty laptop you can upgrade 
the hdd and then it becomes a non-issue anyways.


Your argument implicitly assumed a specific reason (albeit a 
generally sensible one) as to why low install size was a (must) 
requirement (physical storage limitations being only one possible 
reason; shared devices with fixed disk quotas or devices owned by 
the university with certain policies being other possibilities). 
That is why I didn't (and don't) consider it relevant to the 
specific point I was making about keeping it as low as possible.


Re: Symbols missing, unmangle!

2017-08-30 Thread Moritz Maxeiner via Digitalmars-d

On Wednesday, 30 August 2017 at 20:23:18 UTC, Johnson Jones wrote:
It would be nice if, when symbols are missing, they are 
unmangled!


Error 42: Symbol Undefined 
_D12mMunchhousin12iMunchhousin11__T4GoTsZ4GoMFS12mMunchhousin18__T10MunchhousinTsZ10sMunchhousinfE12mMunchhousin9eGoffZv (void Munchhousin.Munchhousin.Go!(short).Go()


Since that's a linker error and there are a multitude of linkers 
that a person could want to use after a D compiler this is a 
non-trivial issue in general.

Options to tackle this include:
1) Have a D compiler capture the linker output and demangle it
2) Try to get demangling of D symbols into upstream of the 
currently common linkers (GNU linker, gold, lld, etc.)
3) Integrate a (FLOSS) cross platform linker into dmd's backend 
at the source code level, with support for such demangling (and 
drop OPTLINK)
4) Fork a (FLOSS) cross platform linker for use with D, add such 
support, and distribute a binary of it with dmd's binary 
distribution (and drop OPTLINK)


I'm not proposing any of these are what should be done, I've 
listed them more as an example that something like this would 
require extensive discussion.


[1] https://github.com/ldc-developers/ldc/releases/tag/v1.3.0
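
For illustration of option 1: druntime already ships a demangler 
such an approach could build on. A minimal sketch, using the 
mangled name quoted above (`demangle` returns its input unchanged 
if it cannot decode it):

---
import core.demangle : demangle;
import std.stdio : writeln;

void main()
{
    // Prints the human-readable form of the symbol, or the input
    // unchanged if it cannot be demangled.
    writeln(demangle("_D12mMunchhousin12iMunchhousin11__T4GoTsZ4GoMFS12mMunchhousin18__T10MunchhousinTsZ10sMunchhousinfE12mMunchhousin9eGoffZv"));
}
---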


Re: DIP 1009--Improve Contract Usability--Formal Review

2017-08-30 Thread Moritz Maxeiner via Digitalmars-d

On Wednesday, 30 August 2017 at 14:05:40 UTC, Mark wrote:

[...]

int abs(int x)
out(_ >= 0)
{
return x>0 ? x : -x;
}


The ambiguity issue of having two results in one scope [1] 
applies.


[1] http://forum.dlang.org/post/oihbot$134s$1...@digitalmars.com


Re: Output range with custom string type

2017-08-29 Thread Moritz Maxeiner via Digitalmars-d-learn

On Tuesday, 29 August 2017 at 09:59:30 UTC, Jacob Carlborg wrote:

[...]

But if I keep the range internal, can't I just do the 
allocation inside the range and only use "formattedWrite"? 
Instead of using both formattedWrite and sformat and go through 
the data twice. Then of course the final size is not known 
before allocating.


Certainly, that's what dynamic arrays (aka vectors, e.g. 
std::vector in C++ STL) are for:


---
import core.exception;

import std.stdio;
import std.experimental.allocator;
import std.algorithm;

struct PoorMansVector(T)
{
private:
    T[] store;
    size_t length;
    IAllocator alloc;
public:
    @disable this(this);
    this(IAllocator alloc)
    {
        this.alloc = alloc;
    }
    ~this()
    {
        if (store)
        {
            alloc.dispose(store);
            store = null;
        }
    }
    void put(T t)
    {
        if (!store)
        {
            // Allocate only once for "small" vectors
            store = alloc.makeArray!T(8);
            if (!store) onOutOfMemoryError();
        }
        else if (length == store.length)
        {
            // Growth factor of 1.5
            auto expanded = alloc.expandArray(store, store.length / 2);
            if (!expanded) onOutOfMemoryError();
        }
        assert (length < store.length);
        moveEmplace(t, store[length++]);
    }
    T[] release()
    {
        auto elements = store[0..length];
        store = null;
        return elements;
    }
}

char[] sanitize(string value, IAllocator alloc)
{
    import std.format : formattedWrite;

    auto r = PoorMansVector!char(alloc);
    (&r).formattedWrite!"'%s'"(value); // pass a pointer so the range isn't copied
    return r.release();
}

void main()
{
    auto s = sanitize("foo", theAllocator);
    scope (exit) theAllocator.dispose(s);
    writeln(s);
}
---

Do be aware that the above vector is named "poor man's vector" 
for a reason: it's a hasty write-down from memory and is sure 
to contain bugs.
For better vector implementations you can look at collection 
libraries such as the EMSI containers; my own attempt at a DbI 
vector container can be found here [1]


[1] 
https://github.com/Calrama/libds/blob/6a1fc347e1f742b8f67513e25a9fdbf79f007417/src/ds/vector.d


Re: Editor recommendations for new users.

2017-08-29 Thread Moritz Maxeiner via Digitalmars-d

On Tuesday, 29 August 2017 at 14:05:13 UTC, Ryion wrote:
On Monday, 28 August 2017 at 21:17:19 UTC, Moritz Maxeiner 
wrote:

Why "again"? You've not stated so before AFAICT.
Regardless, I disagree that discussing the validity of 
recommendations in a thread specifically made to gather such 
recommendations is a distraction from the topic; I would 
contend that it lies at the heart of the topic.


The poster asked for programs that fit his (vague) criteria, it 
is NOT up to you to determine what those criteria are


We're repeating ourselves here, so we're going to have to agree 
to disagree, as I don't agree that that's what I was doing.


and then belittle people's posts that try to help out with 
their own recommendations. The fact that you can not see this 
even now really is an issue.


I don't consider the way I argue to be belittling and I resent 
the accusation.

Side point: DlangIDE invalidates my recommendation, as well



And i am not referring to this topic alone or those that i 
personally post in. There are many where the same patterns are 
visible and i notice the pattern that it's always your name next 
to those posts.


Is it so hard for you to not always override topics here and 
constant "straw man" or other terms calling.


I have to point out that when I attribute "straw man" to a quote, 
it's because the author of that quote has responded to something 
I wrote, but argued against a point that I did not make, which is 
a logical fallacy. The same applies to other such fallacies such 
as "red herring" and if you do catch me in one, I do hope you 
point it out, as it is hard to see when one is committing one 
oneself.


And i use this term because you constantly write 
"irrelevant", "straw man argumentation", "but I don't care" and 
other belittling statements that seem to indicate that your 
opinion means more than others'.


I don't see how pointing out logical fallacies constitutes 
belittling (again, please do point them out if you catch me in 
one).
W.r.t. the "I don't care" (I assume you refer to the website 
thread): If I perceive someone trying to engage me in a topic I 
have no interest in after I've commented about general procedure 
(which applies to the topic being turned from idea to tangible 
result) I can either ignore them, or point out that it doesn't 
interest me. I consider the first option to be ruder.
Lastly the "irrelevant": If someone disagrees with me dismissing 
their argument like that I welcome a counter argument as to why 
they do consider it relevant to the point I was making in the 
quote they replied to.


Or how you supposedly do not care and have no issue pointing it 
out half a dozen times.


I pointed it out again when, despite earlier comment(s) on the 
subject, the attempt to engage me in it was made again.




It gets very fast tiresome. You are the only poster that i see 
here that is non-stop doing this. If you do not like something 
or find it irrelevant, then do not respond to it.


I generally don't; if someone responds either to me, or posts in 
a discussion I've joined, that's another matter, though.


But the way you act, like posts are below or irrelevant to 
you...


If they were I wouldn't take the time to respond.
I point these things in responses to me out because I hope for a 
reply containing an actual counter argument to the point I was 
making.




This is the "again" i refer to. You do this is a lot of topics. 
You dissect people there posts and write how it is irrelevant 
to you or some other clever looking down terminology. It 
totally distracts from the topic at hand and frankly, makes 
people less likely to continue topics.


I strongly disagree that pointing out logical fallacies distracts 
from the topic at hand, because that's what logical fallacies do.
W.r.t. post dissection: Addressing individual points allows the 
exchange of specific arguments and counter arguments.




It's this kind of attitude that in MY personal opinion makes 
this mailing board toxic for new users. While you are not 
impolite, the way you act upon people's posts makes it hard 
to have an honest discussion with you without it turning 
off-topic or simply scaring people away.


I'm not sure if you're making the point that you want to write 
things to me that you don't want to expose others to, or that you 
don't feel that you can have a discussion with me on account of 
how I write. For the former: You can send me a private email. For 
the latter: The best I can do is assure you that I'll refrain 
from responding to you first in a thread (unless there are 
exceptional circumstances); if you respond to me, that's another 
matter.




So again, politely: refrain from acting like this and let 
people have their own opinion without you dissecting every 
piece.


Again, if someone replies to me with a logical fallacy, I will 
point that out; the same way I would expect them to point it out 
if I were to do it.
I will also address the 

Re: Accessing outer class attribute from inner struct

2017-08-29 Thread Moritz Maxeiner via Digitalmars-d-learn

On Tuesday, 29 August 2017 at 07:59:40 UTC, Andre Pany wrote:
On Monday, 28 August 2017 at 23:12:40 UTC, Moritz Maxeiner 
wrote:


In both cases S doesn't inherently know about C, which means a 
solution using default initialization is not feasible, as 
S.init can't know about any particular instance of C.
I don't think there's any way for you to avoid using a class 
constructor.


Thanks for the explanation. I now tried to use a class and use 
a static opIndex. But it seems from a static method you also 
cannot access the attributes of a outer class :)


A nested class' outer property (when nested inside another class) 
is a class reference, which means we not only need an instance 
of the outer class to reference, but also an instance of the 
nested class in which to store that reference.
A static class method (by definition) is invoked without a class 
instance.

The two are inherently incompatible.
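
A minimal sketch of the difference (all names made up):

---
class Outer
{
    int value = 42;

    class Inner
    {
        // Non-static nested class: `this.outer` refers to the Outer
        // instance this Inner was created from.
        int read() { return this.outer.value; }
    }

    // A `static class Inner` (or a static method) has no such instance,
    // hence no `outer` to go through.

    Inner makeInner()
    {
        return new Inner; // the new Inner's `outer` is set to `this`
    }
}

void main()
{
    auto o = new Outer;
    auto i = o.makeInner();
    assert(i.read() == 42);
}
---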


[...]

This seems like an unnecessary limitation...


I can only recommend reading the language specification w.r.t. 
nested classes [1] if it seems that way to you, because it is not.


[1] https://dlang.org/spec/class.html#nested



Re: C callbacks getting a value of 0! Bug in D?

2017-08-28 Thread Moritz Maxeiner via Digitalmars-d-learn

On Tuesday, 29 August 2017 at 02:47:34 UTC, Johnson Jones wrote:

[...]

Seems only long and ulong are issues.


With respect to the current major platforms you can reasonably 
expect software to run on, yes.
Just don't try to use D on something with e.g. 32 bit C shorts 
unless you bind to it via c_short.


Re: D Tour is down

2017-08-28 Thread Moritz Maxeiner via Digitalmars-d

On Monday, 28 August 2017 at 23:57:01 UTC, Mengu wrote:

On Monday, 28 August 2017 at 17:16:59 UTC, Mengu wrote:
On Monday, 28 August 2017 at 08:19:10 UTC, Petar Kirov 
[ZombineDev] wrote:

On Monday, 28 August 2017 at 07:52:00 UTC, Joakim wrote:

On Monday, 28 August 2017 at 07:44:48 UTC, Wulfklaue wrote:

On Sunday, 27 August 2017 at 22:27:45 UTC, Mengu wrote:
d tour page is down for at least a week now. someone 
please fix that.


thanks.


Seems to be active for me ...


It shows a blank page for me.  Also, the wiki seems really 
slow nowadays.


Can you try again? I think that if there was a problem, it is 
gone now.


it works on my android phone rn. i'll post if it doesn't work 
on the mac.


on mac, with chrome version 60.0.3112.90 (64-bit), it renders 
an empty page.


Do you have some script blocking enabled? Because the tour needs 
to be able to load from ajax.googleapis.com; if that is blocked, 
it renders an empty page (tested with uMatrix and Firefox 55.0.2).


Re: C callbacks getting a value of 0! Bug in D?

2017-08-28 Thread Moritz Maxeiner via Digitalmars-d-learn

On Tuesday, 29 August 2017 at 01:34:40 UTC, Johnson Jones wrote:

[...]


produces 4 on both x86 and x64. So, I'm not sure how you are 
getting 8.


There are different 64bit data models [1] and it seems your 
platform uses LLP64, which uses 32bit longs. Am I correct in 
assuming you're on Windows (as they are the only major modern 
platform that I'm aware of that made this choice)?


[1] 
https://en.wikipedia.org/wiki/64-bit_computing#64-bit_data_models
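
If in doubt, a quick compile-time check (minimal sketch, using the 
c_long alias from core.stdc.config mentioned elsewhere in this 
thread) shows what you get on a given target:

---
import core.stdc.config : c_long;

// Prints e.g. "C long is 64 bits on this target" on LP64 (Linux/macOS
// x86_64) and "C long is 32 bits on this target" on LLP64 (Win64).
pragma(msg, "C long is ", cast(int) (c_long.sizeof * 8), " bits on this target");

void main() {}
---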


Re: Accessing outer class attribute from inner struct

2017-08-28 Thread Moritz Maxeiner via Digitalmars-d-learn

On Monday, 28 August 2017 at 22:47:12 UTC, Andre Pany wrote:
On Monday, 28 August 2017 at 22:28:18 UTC, Moritz Maxeiner 
wrote:

On Monday, 28 August 2017 at 21:52:58 UTC, Andre Pany wrote:

[...]

To make my question short:) If ColumnsArray is a class I can 
access the attribute "reference" but not if it is a struct. I 
would rather prefer a struct, but with a struct

it seems I cannot access "reference".

How can I access "reference" from my inner struct?

[...]


Add an explicit class reference member to it:
---
class TCustomGrid: TCustomPresentedScrollBox
{
struct ColumnsArray
{
TCustomGrid parent;

TColumn opIndex(int index)
{
			int r = getIntegerIndexedPropertyReference(parent.reference, "Columns", index);

return new TColumn(r);
}
}

ColumnsArray Columns;

this()
{
Columns = ColumnsArray(this);
}
...
}
---

Nesting structs inside anything other than functions[1] is for 
visibility/protection encapsulation and namespacing only.


[1] non-static structs in functions are special as they have 
access to the surrounding stack frame


Unfortunately that's not possible. ColumnsArray and the 
attribute will become a string mixin to avoid boilerplate.


It would be error prone if I have to initialize them in the 
constructor too. I want just 1 single coding line for this 
property. That is also the reason I do not want to use a class, 
as I would have to initialize them in the constructor.


---
class C
{
struct S
{
}
S s;
}
---

is semantically equivalent to

---
struct S
{
}

class C
{
S s;
}
---

with the two differences being
- namespacing (outside of C one has to use C.S to access S)
- you can protect the visibility of S from outside the module 
C resides in via private, public, etc.


In both cases S doesn't inherently know about C, which means a 
solution using default initialization is not feasible, as S.init 
can't know about any particular instance of C.
I don't think there's any way for you to avoid using a class 
constructor.


Re: C callbacks getting a value of 0! Bug in D?

2017-08-28 Thread Moritz Maxeiner via Digitalmars-d-learn

On Monday, 28 August 2017 at 22:21:18 UTC, Johnson Jones wrote:
On Monday, 28 August 2017 at 21:35:27 UTC, Steven Schveighoffer 
wrote:

On 8/27/17 10:17 PM, Johnson Jones wrote:

[...]


For C/C++ interaction, always use c_... types if they are 
available. The idea is both that they will be correctly 
defined for the width, and also it will mangle correctly for 
C++ compilers (yes, long and int are mangled differently even 
when they are the same thing).


-Steve


and where are these c_ types defined? The reason I replaced 
them was precisely because D was not finding them.


core.stdc.config

, which unfortunately doesn't appear in the online documentation 
AFAICT (something that ought to be fixed).
A common workaround is to use pattern searching tools like grep 
if you know the phrase to look for:

$ grep -Er c_long /path/to/imports
, or in this case, since these things are usually done with 
aliases:

$ grep -Er 'alias\s+\w*\s+c_long' /path/to/imports
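
For illustration, a typical use of those aliases in a binding might 
look like this (the C function `set_timeout` is made up; substitute 
whatever your library actually declares with a C `long`):

---
import core.stdc.config : c_long;

// Hypothetical C prototype: long set_timeout(long milliseconds);
// c_long matches the target's C long width, whatever it is.
extern (C) c_long set_timeout(c_long milliseconds);
---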


Re: Accessing outer class attribute from inner struct

2017-08-28 Thread Moritz Maxeiner via Digitalmars-d-learn

On Monday, 28 August 2017 at 21:52:58 UTC, Andre Pany wrote:

[...]

To make my question short:) If ColumnsArray is a class I can 
access the attribute "reference" but not if it is a struct. I 
would rather prefer a struct, but with a struct

it seems I cannot access "reference".

How can I access "reference" from my inner struct?

[...]


Add an explicit class reference member to it:
---
class TCustomGrid: TCustomPresentedScrollBox
{
struct ColumnsArray
{
TCustomGrid parent;

TColumn opIndex(int index)
{
			int r = getIntegerIndexedPropertyReference(parent.reference, "Columns", index);

return new TColumn(r);
}
}

ColumnsArray Columns;

this()
{
Columns = ColumnsArray(this);
}
...
}
---

Nesting structs inside anything other than functions[1] is for 
visibility/protection encapsulation and namespacing only.


[1] non-static structs in functions are special as they have 
access to the surrounding stack frame


Re: Output range with custom string type

2017-08-28 Thread Moritz Maxeiner via Digitalmars-d-learn

On Monday, 28 August 2017 at 14:27:19 UTC, Jacob Carlborg wrote:
I'm working on some code that sanitizes and converts values of 
different types to strings. I thought it would be a good idea 
to wrap the sanitized string in a struct to have some type 
safety. Ideally it should not be possible to create this type 
without going through the sanitizing functions.


The problem I have is that I would like these functions to push 
up the allocation decision to the caller. Internally these 
functions use formattedWrite. I thought the natural design 
would be that the sanitize functions take an output range and 
pass that to formattedWrite.


[...]

Any suggestions how to fix this or a better idea?


If you want the caller to be in charge of just the allocation, 
that's what std.experimental.allocator provides. In this case, 
I would polish up the old "format once to get the length, 
allocate, format a second time into the allocated buffer" method 
known from snprintf, for D:


--- test.d ---
import std.stdio;
import std.experimental.allocator;

struct CountingOutputRange
{
private:
    size_t _count;
public:
    size_t count() { return _count; }
    void put(char c) { _count++; }
}

char[] sanitize(string value, IAllocator alloc)
{
    import std.format : formattedWrite, sformat;

    CountingOutputRange r;
    (&r).formattedWrite!"'%s'"(value); // pass a pointer so the range isn't copied

    auto s = alloc.makeArray!char(r.count);
    scope (failure) alloc.dispose(s);

    // This should only throw if the user-provided allocator
    // returned less memory than was requested
    return s.sformat!"'%s'"(value);
}

void main()
{
    auto s = sanitize("foo", theAllocator);
    scope (exit) theAllocator.dispose(s);
    writeln(s);
}
---


Re: Editor recommendations for new users.

2017-08-28 Thread Moritz Maxeiner via Digitalmars-d

On Monday, 28 August 2017 at 20:48:44 UTC, Ryion wrote:
On Sunday, 27 August 2017 at 18:08:52 UTC, Moritz Maxeiner 
wrote:
It's nearly ten times the size, so yeah, it is relative to 
Textadept.


You can say the same thing in comparison with vim which is 
only a 2MB install size,

20MB in comparison is gigantic.


Indeed, but that's only the raw executable, not the full 
package (which includes things like syntax highlighting), 
which adds another 26MB.
But, yes, Textadept and vim+vim-core (Gentoo speak) are both 
gigantic compared to bare-bones vim. But bare-bones vim 
doesn't fulfill the syntax highlighting requirement IIRC.


The requirements are rather vague, you can interpret it in a 
number of ways.


The sensible interpretation imho is "as low an install 
footprint as possible while still fulfilling the other 
requirements". I'm not aware of anything below ~20MB install 
footprint that fulfills the other requirements, but I'd be 
interested if you know any.


As the OP did not state any requirement, he can consider 2GB as 
small.


If there's nothing significantly smaller that fits the other 
requirements, yes.

As those exist, no.


Vague requirements do not invalidate the recommendation.


I don't consider the requirement to be vague if taken together 
with the other *must* requirements. On its own, I would agree 
with you.




Laptops have 1TB harddrives as good as standard.

Even on a "small" 128GB SSD, it pales in comparison to the 10GB 
that Windows alone takes. Let alone the page file, swapfile, 
hibernation file etc...


All red herrings.



I wouldn't consider 200MB gigantic in comparison to 20MB 
cause there is literally no difference of use for me.


The thread is about OP's requirements.

You'd have to have a really shitty laptop for it to be an 
issue.


Not relevant.


As the OP has not stated the size of the laptops it needs to be 
installed upon, the discussion about 180MB vs 20MB or 2MB is 
irrelevant.


Except I'm not arguing that ~20MB is small. It's just small 
compared to 180MB in this specific context as both fulfill the 
other requirements.
If I knew of a 2MB recommendation that fits the other 
requirements (such as easy to install) I would say 20MB is 
gigantic and consider my own recommendation to be invalid.


We are not talking a 4GB Visual Studio installation. And its 
160MB for the 32Bit version. :)


You say that particular discussion is irrelevant, yet you pursue 
it.




So if the OP has other requirements, HE can state them in this 
topic, instead of you making up ideas as to what YOU consider 
small.


I'm not making up any ideas about what's small in terms of a 
fixed number; I've merely argued about sizes in relation to 
each other, i.e. 180MB is gigantic only in relation to the 20MB, 
under the assumption that both fulfill all other requirements. 
With regards to the requirements I've stated what I consider the 
sane interpretation, but if the OP clarifies that point to a hard 
number, that would indeed be helpful.


Your comments are irrelevant without knowing the OP's 
expectations.


I consider OP's expectations to be clear from his posted 
requirements, so until OP has indeed clarified, I disagree.




So again please do not distract from the topic.


Why "again"? You've not stated so before AFAICT.
Regardless, I disagree that discussing the validity of 
recommendations in a thread specifically made to gather such 
recommendations is a distraction from the topic; I would contend 
that it lies at the heart of the topic.


Re: gcd with doubles

2017-08-27 Thread Moritz Maxeiner via Digitalmars-d-learn

On Sunday, 27 August 2017 at 19:47:59 UTC, Alex wrote:

[..]
Is there a workaround, maybe?


To expand on the earlier workaround: you can also adapt a 
floating-point-to-string algorithm in order to dynamically 
determine an upper bound on the number of digits required after 
the decimal point. Below is an untested adaptation of the 
reference C implementation of errol0 [1] for that purpose (MIT 
licensed, as that is what the original code is under):


---
void main()
{
assert(gcd(0.5, 32) == 0.5);
assert(gcd(0.2, 32) == 0.2);

assert(gcd(1.3e2, 3e-5) == 1e-5);
}

template gcd(T)
{
import std.traits : isFloatingPoint;

T gcd(T a, T b)
{
static if (isFloatingPoint!T)
{
return fgcd(a, b);
}
else
{
import std.numeric : igcd = gcd;
return igcd(a, b);
}
}

static if (isFloatingPoint!T)
{
import std.math : nextUp, nextDown, pow, abs, isFinite;
import std.algorithm : max;

T fgcd(T a, T b)
in
{
assert (a.isFinite);
assert (b.isFinite);
assert (a < ulong.max);
assert (b < ulong.max);
}
body
{
            short a_exponent;
            int a_digitCount = errol0CountOnly(abs(a), a_exponent);

            short b_exponent;
            int b_digitCount = errol0CountOnly(abs(b), b_exponent);


a_digitCount -= a_exponent;
if (a_digitCount < 0)
{
a_digitCount = 0;
}

b_digitCount -= b_exponent;
if (b_digitCount < 0)
{
b_digitCount = 0;
}

auto coeff = pow(10, max(a_digitCount, b_digitCount));
assert (a * coeff < ulong.max);
assert (b * coeff < ulong.max);
            return (cast(T) euclid(cast(ulong) (a * coeff),
                                   cast(ulong) (b * coeff))) / coeff;

}

ulong euclid(ulong a, ulong b)
{
while (b != 0)
{
auto t = b;
b = a % b;
a = t;
}
return a;
}

struct HighPrecisionFloatingPoint
{
T base, offset;

void normalize()
{
T base = this.base;

this.base   += this.offset;
this.offset += base - this.base;
}

void mul10()
{
T base = this.base;

this.base   *= T(10);
this.offset *= T(10);

T offset = this.base;
offset -= base * T(8);
offset -= base * T(2);

this.offset -= offset;

normalize();
}

void div10()
{
T base = this.base;

this.base   /= T(10);
this.offset /= T(10);

base -= this.base * T(8);
base -= this.base * T(2);

this.offset += base / T(10);

normalize();
}
}
alias HP = HighPrecisionFloatingPoint;

enum epsilon = T(0.001);
ushort errol0CountOnly(T f, out short exponent)
{
ushort digitCount;

T ten = T(1);
exponent = 1;

auto mid = HP(f, T(0));

            while (((mid.base > T(10)) || ((mid.base == T(10)) && (mid.offset >= T(0)))) && (exponent < 308))

{
exponent += 1;
mid.div10();
ten /= T(10);
}

            while (((mid.base < T(1)) || ((mid.base == T(1)) && (mid.offset < T(0)))) && (exponent > -307))

{
exponent -= 1;
mid.mul10();
ten *= T(10);
}

            auto inhi = HP(mid.base, mid.offset + (nextUp(f) - f) * ten / (T(2) + epsilon));
            auto inlo = HP(mid.base, mid.offset + (nextDown(f) - f) * ten / (T(2) + epsilon));


inhi.normalize();
inlo.normalize();

            while (inhi.base > T(10) || (inhi.base == T(10) && (inhi.offset >= T(0))))

{
exponent += 1;
inhi.div10();
inlo.div10();
}

            while (inhi.base < T(1) || (inhi.base == T(1) && (inhi.offset < T(0))))

{
exponent -= 1;
inhi.mul10();
inlo.mul10();
}

while (inhi.base != T(0) || inhi.offset != T(0))
{
auto hdig = cast(ubyte) inhi.base;
if ((inhi.base == hdig) && (inhi.offset < T(0)))
{
hdig -= 1;
}

auto ldig = cast(ubyte) inlo.base;
if ((inlo.base == ldig) && (inlo.offset < 0))
{
ldig -= 1;

Re: gcd with doubles

2017-08-27 Thread Moritz Maxeiner via Digitalmars-d-learn

On Sunday, 27 August 2017 at 19:47:59 UTC, Alex wrote:

Hi, all.
Can anybody explain to me why

void main()
{
import std.numeric;
assert(gcd(0.5,32) == 0.5);
assert(gcd(0.2,32) == 0.2);
}

fails on the second assert?

I'm aware, that calculating gcd on doubles is not so obvios, as 
on integers. But if the library accepts doubles, and basically 
the return is correct occasionally, why it is not always the 
case?


If the type isn't a builtin integral and can't be bit shifted, 
the gcd algorithm falls back to using the Euclidean algorithm in 
order to support custom number types, so the gcd call above 
reduces to:


---
double gcd(double a, double b)
{
while (b != 0)
{
auto t = b;
b = a % b;
a = t;
}
return a;
}
---

The issue boils down to the fact that `32 % 0.2` yields 
(approximately) `0.2` instead of `0.0`, so the best answer I can 
give is "because floating point calculations are approximations". 
I'm actually not sure if this is a bug in fmod or expected 
behaviour, but I'd tend to the latter.
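
A tiny check (independent of std.numeric) makes this visible:

---
import std.stdio;

void main()
{
    // 0.2 has no exact double representation, so the remainder lands
    // very close to 0.2 rather than at 0.0.
    writefln("%.20f", 32 % 0.2);
}
---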



Is there a workaround, maybe?


If you know how many digits of precision are needed after the 
decimal dot, you can multiply beforehand, compute the gcd in the 
integer realm, and divide afterwards (be warned, the below is 
only an example implementation for readability; it does not do 
the required overflow checks for the double -> ulong conversion!):


---
import std.traits : isFloatingPoint;
T gcd(ubyte precision, T)(T a, T b) if (isFloatingPoint!T)
{
import std.numeric : _gcd = gcd;
    immutable T coeff = 10 ^^ precision;
return (cast(T) _gcd(cast(ulong) (a * coeff),
 cast(ulong) (b * coeff))) / coeff;
}
---
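
Hypothetical usage for the cases from the original question, with 
the helper above in scope and two digits after the decimal point 
assumed:

---
void main()
{
    // Exact equality holds here because the scaled values are small integers.
    assert(gcd!2(0.2, 32.0) == 0.2);
    assert(gcd!2(0.5, 32.0) == 0.5);
}
---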


Re: Promoting TutorialsPoint's D tutorial

2017-08-27 Thread Moritz Maxeiner via Digitalmars-d

On Sunday, 27 August 2017 at 20:13:35 UTC, Ecstatic Coder wrote:
Following what you said, I've just looked at both Github 
accounts, and I can clearly see that it's much above my skill 
set to merge the content of both D-based websites so that the 
main page of the Dlang website looks exactly like what I want.


Fair enough, in that case could you open an enhancement request 
over at [1], so it doesn't get lost?


[1] https://issues.dlang.org/


Re: Promoting TutorialsPoint's D tutorial

2017-08-27 Thread Moritz Maxeiner via Digitalmars-d

On Sunday, 27 August 2017 at 19:29:03 UTC, Ryion wrote:
On Sunday, 27 August 2017 at 18:51:00 UTC, Moritz Maxeiner 
wrote:
Thanks, but as I pointed out, the website's design is of no 
interest to me personally.


As I said, you aren't going to change my interests (and I'm 
reasonably convinced you won't change other peoples', either).


Add my voice to that corpus - I honestly don't care what the 
website looks like.


This whole topic is about improving the website. The fact that 
you are already a well versed D programmer that sees no 
usefulness in actually improving the readability of the site is 
irrelevant.


That's a straw man, since I didn't voice an opinion about its 
usefulness. I said it doesn't interest me; what *does* interest 
me is trying to ensure that peoples' ideas don't have to die off 
in the forum, i.e. that they know (if they have the time and 
interest) what to do in order for them to be incorporated.




The constant repeating that it does not interest you, simply 
discourages people.


If you recall my initial post, all I did was point out the places 
to send a PR to if he wants the changes to be incorporated. 
W.r.t. expressing my disinterest in the particulars of the topic: 
I've only done so when I judged him to attempt to engage me about 
them in a way where just not replying would've been rude.


Same with pointing out that ( you think ) he can not change 
other people his minds.


My (apparently too implicit) point was that he shouldn't have to 
care about what I, or others, are interested in. If he cares 
about it and has the time and will to do it (which so far I've 
not seen a contrary statement of his to), the way to go is to 
make a PR, because that'll (eventually) yield an official 
reaction by the appropriate people.


I personally think he is right and the site is not information 
friendly. Lots of content does not mean it's useful if that 
content is badly presented.


You don't seem to want me to express my (dis)interest in the 
particulars of the topic, yet you respond to me in a way that's 
designed to elicit that response from me. I find that confusing.




If somebody is spending a lot of time simply writing issues 
that they think can be improved, let them try. Even if it dies 
later in the topic.


Why do you imply that I'm trying to stop someone?



If this topic did not exist, i would not have found out that 
Adam has an experimental library that wins hands down 
compared to the current massive text blob. Even if it's a few 
versions behind, it's way more clean and easy to use than what is 
now on the website.


http://dpldocs.info/experimental-docs/std.html

So at minimum one positive thing came from the topic.


That's why one of my earlier responses contained

Starting and/or participating in discussions can be valuable 
to the community


Re: Editor recommendations for new users.

2017-08-27 Thread Moritz Maxeiner via Digitalmars-d

On Sunday, 27 August 2017 at 18:14:07 UTC, Adam D. Ruppe wrote:
On Sunday, 27 August 2017 at 18:08:52 UTC, Moritz Maxeiner 
wrote:
Indeed, but that's only the raw executable, not the full 
package (which includes things like syntax highlighting), 
which adds another 26MB.
But, yes, Textadept and vim+vim-core (Gentoo speak) are both 
gigantic compared to bare-bones vim. But bare-bones vim 
doesn't fulfill the syntax highlighting requirement IIRC.


I don't know how it is packaged on your system, but the vim 
syntax highlighting for D is like 12 KB and pretty easy to just 
drop in and use on its own.


One can definitely splice together one's own minimal vim with D 
support, but that would require more work than simply installing 
the right packages (which I assumed the requirement "simple to 
install" to exclude).
The 26MB I spoke of are localizations (manual, messages, 
keymaps), default shipped .vim files (like netrw, color schemes, 
languages, compiler support), docfiles, and vim-tutor, all of 
which are AFAIK part of the canonical vim distribution.


Re: Promoting TutorialsPoint's D tutorial

2017-08-27 Thread Moritz Maxeiner via Digitalmars-d

On Sunday, 27 August 2017 at 18:07:27 UTC, Ecstatic Coder wrote:
I've already received enough "No, not interested" answers 
till now to the same proposal to think that this will be ok 
this time.


Add my voice to that corpus - I honestly don't care what the 
website looks like.


Ok, message received.


Considering what follows, I'm not sure about that, but OK.


At least I've got my answer for the PR.


Not sure what you mean here, unless you've opened a PR and I 
missed it?
My personal interests have nothing to do with whether a PR would 
get accepted or not by the people in charge of the website, so I 
can't see how you got "your answer for the PR".



Thanks for your honesty. Sincerely.

Now, to be 100% honest with you, I'm still convinced it DOES 
matter, because many people who have heard about D, whether 
it's in a conference, on a blog or article or by a colleague, 
will eventually land on the main page of this website.


Then they will have to decide if they install the compiler and 
learn D, or not.


And I'm pretty sure many won't, for the reasons explained.


That's a possibility, but I don't care about whether or not they do.
I care about the quality of D itself as a PL, which in my 
(heavily) biased opinion won't be impacted by people who base 
their tool choices on website design.




Now you know EXACTLY, to the smallest detail, what I would 
personally do to fix that...


So what's stopping you other than pre approval?



And yes, call it masochism, I continue proposing the change 
over and over, because I'm totally convinced that those 
changes to the dlang.org landing page are REALLY needed.


I haven't called it anything, yet, but if I were to call it 
something, it would be insanity, because I see no causal link 
between proposing the same thing repeatedly and other peoples' 
interests.


LOL. Ok so let's be insane once again... ;)

My proposal is to :

[...]


As I said, you aren't going to change my interests (and I'm 
reasonably convinced you won't change other peoples', either). 
The only reason I replied initially was so that whatever the 
prevalent idea was had a chance not to die off in the forum like 
most others who don't have a champion; the particulars of the 
topic itself weren't relevant to me.

What you do with that is up to you.



Still not convinced ?


Convinced of what, exactly?


[...]

Anyway, feel free to copy-paste the changes I've suggested, 
they are 100% free to use...


Thanks, but as I pointed out, the website's design is of no 
interest to me personally.


Re: Editor recommendations for new users.

2017-08-27 Thread Moritz Maxeiner via Digitalmars-d

On Sunday, 27 August 2017 at 16:22:44 UTC, Jerry wrote:
On Sunday, 27 August 2017 at 15:17:51 UTC, Moritz Maxeiner 
wrote:

On Sunday, 27 August 2017 at 13:15:41 UTC, Ryion wrote:
On Sunday, 27 August 2017 at 10:05:29 UTC, Nicholas Wilson 
wrote:

The following are a must:
no large install footprint


Visual Studio Code seems to be what you need.

[...]

Relative low memory footprint for the functionality ( 
compared to several IDEs that do the same ).


[...]


The (must) requirement was install footprint, not memory 
footprint, and as Visual Studio code uses the electron 
framework[1] its install footprint is gigantic (about 180MB vs 
e.g. TextAdept's 20MB).


It isn't that gigantic in comparison.


It's nearly ten times the size, so yeah, it is relative to 
Textadept.


You can say the same thing in comparison with vim which is only 
a 2MB install size,

20MB in comparison is gigantic.


Indeed, but that's only the raw executable, not the full package 
(which includes things like syntax highlighting), which adds 
another 26MB.
But, yes, Textadept and vim+vim-core (Gentoo speak) are both 
gigantic compared to bare-bones vim. But bare-bones vim doesn't 
fulfill the syntax highlighting requirement IIRC.


The requirements are rather vague, you can interpret it in a 
number of ways.


The sensible interpretation imho is "as low an install footprint 
as possible while still fulfilling the other requirements". I'm 
not aware of anything below ~20MB install footprint that fulfills 
the other requirements, but I'd be interested if you know any.


I wouldn't consider 200MB gigantic in comparison to 20MB cause 
there is literally no difference of use for me.


The thread is about OP's requirements.


You'd have to have a really shitty laptop for it to be an issue.


Not relevant.


Re: Editor recommendations for new users.

2017-08-27 Thread Moritz Maxeiner via Digitalmars-d

On Sunday, 27 August 2017 at 13:15:41 UTC, Ryion wrote:
On Sunday, 27 August 2017 at 10:05:29 UTC, Nicholas Wilson 
wrote:

The following are a must:
no large install footprint


Visual Studio Code seems to be what you need.

[...]

Relative low memory footprint for the functionality ( compared 
to several IDEs that do the same ).


[...]


The (must) requirement was install footprint, not memory 
footprint, and as Visual Studio code uses the electron 
framework[1] its install footprint is gigantic (about 180MB vs 
e.g. TextAdept's 20MB).


Re: Promoting TutorialsPoint's D tutorial

2017-08-27 Thread Moritz Maxeiner via Digitalmars-d

On Sunday, 27 August 2017 at 13:12:22 UTC, Ecstatic Coder wrote:
I agree, but here it's not a local modification I've done to a 
D library that I want to push so that other people can use it 
too.


It's a change to the main landing page of the dlang.org 
website, which is by definition global and can ONLY be 
validated by those in charge of it.


Which - as I've pointed out - is much likelier to occur if you 
open a PR.




If those people in charge like the idea, then it's fine by me 
to translate the idea into physical changes through a PR.


That's your prerogative, but preapproval is extremely unlikely to 
occur.




But if not, it's a complete loss of time, and sorry to say it, 
but from what I've seen, there are VERY LITTLE chances these 
changes gets validated.


I've already received enough "No, not interested" answers till 
now to the same proposal to think that this will be ok this 
time.


Add my voice to that corpus - I honestly don't care what the 
website looks like.




And yes, call it masochism, I continue proposing the change 
over and over, because I'm totally convinced that those changes 
to the dlang.org landing page are REALLY needed.


I haven't called it anything, yet, but if I were to call it 
something, it would be insanity, because I see no causal link 
between proposing the same thing repeatedly and other peoples' 
interests.
I'm reasonably confident it would've taken you less time to do 
that PR than writing your posts on this topic and reading 
peoples' replies already took from you; the difference between 
the two being that you're no closer to getting your changes 
through right now (or even receiving a definite answer), whereas 
if you had opened the PR you could've already moved on to the 
next thing of interest to you, instead of remaining in the 
current loop of "post idea" -> "wait -> "don't get preapproval" 
-> "wait" -> ...
If you open a PR, it's likely to eventually receive a review that 
will either result in rejection, merging, or a discussion. In 
either case, you're free to pursue other things in the meantime 
and as long as the PR remains open, the changes aren't lost.




That's what the other languages do, it works well for them, and 
NO, I don't see the advantages in doing the opposite of what 
works well for the others.


And if you want to see change, you'll have to champion it via 
a PR (and defend it in the resulting discussion).




It's not plagiarism, it's just common sense...


No idea why you think I would care, since programming languages 
have always been about copying good stuff from others.




Re: Confusion over enforce and assert - both are compiled out in release mode

2017-08-27 Thread Moritz Maxeiner via Digitalmars-d-learn

On Sunday, 27 August 2017 at 10:46:53 UTC, Andrew Chapman wrote:

[...]

Oh interesting.  Does DUB support passing through the 
--enable-contracts flag to ldc?  Also, if this is an ldc 
specific thing it's probably not a good idea i'd imagine, since 
in the future one may want to use a GDC, or DMD?


Also, with regards to gdc, its release mode `-frelease` option is 
explicitly specified in the manual as being shorthand for a 
specific set of options:



This is equivalent to compiling with the following options:

gdc -fno-assert -fbounds-check=safe -fno-invariants \
-fno-postconditions -fno-preconditions -fno-switch-errors


As it doesn't seem to turn on/off any other options / 
optimizations, you can use `"dflags-gdc": [...]` to specify your 
own set of "release" options without losing anything.
In particular, I would override dub's default "release" build 
type [1] and add your own per-compiler build settings, so dub 
won't pass `-frelease` to gdc when using `dub --build=release`.


[1] https://code.dlang.org/package-format?lang=json#build-types
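
For illustration, such an override is a fragment like the 
following in dub.json (flag names taken from the gdc excerpt 
above, here keeping preconditions enabled; adjust to your needs):

---
"buildTypes": {
    "release": {
        "buildOptions": ["optimize", "inline"],
        "dflags-gdc": ["-fno-assert", "-fbounds-check=safe", "-fno-invariants",
                       "-fno-postconditions", "-fno-switch-errors"]
    }
}
---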


Re: Promoting TutorialsPoint's D tutorial

2017-08-27 Thread Moritz Maxeiner via Digitalmars-d

On Sunday, 27 August 2017 at 11:50:18 UTC, Ecstatic Coder wrote:
On Sunday, 27 August 2017 at 11:36:57 UTC, Moritz Maxeiner 
wrote:
On Sunday, 27 August 2017 at 11:26:58 UTC, Ecstatic Coder 
wrote:

[...]

Just add the 4 examples I suggested, and you have a brand-new 
beginner-friendly website without changing anything else to 
the website canvas.


If you want a change in D's web presence submit a PR to [1] or 
one of [2] as appropriate.


[1] https://github.com/dlang/dlang.org
[2] https://github.com/dlang-tour


No problem, but first I'd like to have the design changes 
validated prior to making them.


That's how web developers do with their customers.

1. suggest the changes
2. have the changes accepted
3. make the changes


Unless I've missed you being contracted to do these changes, this 
model doesn't apply. It's not other people who want you to do 
some work (and who, as they are paying you, have a vested interest 
in evaluating it); it's you who wants the changes.




Because there is no interest in making changes that won't be 
accepted eventually...


To be frank, this is how things usually get done in open source 
(outside of corporate interests):
One commits to doing something, does it, then asks for people to 
review the result, and finally tries to get it accepted.
One does this often enough successfully in a particular group of 
people and one earns recognition by their group peers 
(reputation).
Starting and/or participating in discussions can be valuable to 
the community and may yield reputation, as well, but one can't 
realistically expect receiving preapproval for ideas unless one 
has proven to actually follow through on them and contribute 
tangible results.


[1] And you do this often enough successfully in a particular 
project you earn recognition there


Re: Promoting TutorialsPoint's D tutorial

2017-08-27 Thread Moritz Maxeiner via Digitalmars-d

On Sunday, 27 August 2017 at 11:26:58 UTC, Ecstatic Coder wrote:

[...]

Just add the 4 examples I suggested, and you have a brand-new 
beginner-friendly website without changing anything else to the 
website canvas.


If you want a change in D's web presence submit a PR to [1] or 
one of [2] as appropriate.


[1] https://github.com/dlang/dlang.org
[2] https://github.com/dlang-tour


Re: Editor recommendations for new users.

2017-08-27 Thread Moritz Maxeiner via Digitalmars-d

On Sunday, 27 August 2017 at 10:05:29 UTC, Nicholas Wilson wrote:
So I will be doing a workshop on programming for the biology 
department at my university and I was wondering what would best 
suit the users.


The following are a must:
support windows & mac ( the more consistent between the two 
the better)

free
no large install footprint, preferably simple install 
procedure (running on laptops)

syntax highlighting
straightforward to use

anything else is a bonus.

Whats your experience with what you use?

Many thanks
Nic


Textadept [1] matches your requirements.
I found it lightweight, responsive, and easy to use;
I'm only on Sublime Text [2][3] because it's shinier.

[1] https://foicica.com/textadept/
[2] https://www.sublimetext.com/
[3] Depending on your definition of free (libre vs beer) it might 
also qualify


Re: Confusion over enforce and assert - both are compiled out in release mode

2017-08-27 Thread Moritz Maxeiner via Digitalmars-d-learn

On Sunday, 27 August 2017 at 10:46:53 UTC, Andrew Chapman wrote:
On Sunday, 27 August 2017 at 10:37:50 UTC, Moritz Maxeiner 
wrote:

[...]


Oh interesting.  Does DUB support passing through the 
--enable-contracts flag to ldc?


Sure, using platform specific build settings [1] such as 
`"dflags-ldc": ["--enable-contracts"]`.


Also, if this is an ldc specific thing it's probably not a good 
idea i'd imagine, since in the future one may want to use a 
GDC, or DMD?


If you want to use another compiler that supports it, add the 
appropriate "dflags-COMPILER" setting to your package file.
With regards to dmd: Don't use it for release builds, use gdc or 
ldc (better optimizations).


[1] https://code.dlang.org/package-format?lang=json#build-settings


Re: Confusion over enforce and assert - both are compiled out in release mode

2017-08-27 Thread Moritz Maxeiner via Digitalmars-d-learn

On Sunday, 27 August 2017 at 10:17:47 UTC, Andrew Chapman wrote:

On Sunday, 27 August 2017 at 10:08:15 UTC, ag0aep6g wrote:

On 08/27/2017 12:02 PM, Andrew Chapman wrote:
However, I am finding that BOTH enforce and assert are 
compiled out by dmd and ldc in release mode.  Is there a 
standard way of doing what enforce does inside an "in" 
contract block that will work in release mode?


I'm guessing I should write my own function for now.
The whole `in` block is ignored in release mode. Doesn't 
matter what you put in there. Nothing of it will be compiled.


Thanks, that explains it.  I think it's a bit of a shame that 
the "in" blocks can't be used in release mode as the clarity 
they provide for precondition logic is wonderful.


If you need that, you could compile using ldc in release mode 
(which you probably want to do anyway):


--- test.d ---
import std.exception;
import std.stdio;

void foo(int x) in { enforce(x > 0); } body
{

}

void bar(int x) in { assert(x > 0); } body
{

}

void baz(int x) in { if (!(x > 0)) assert(0); } body
{

}

void main()
{
(-1).foo.assertThrown;
(-1).bar;
(-1).baz;
}
--

$ ldc2 test.d
-> failed assert in bar's in contract terminates the program

$ ldc2 -release test.d
-> failed assertThrown in main terminates the program

$ ldc2 -release -enable-contracts test.d
-> failed assert in baz's in contract terminates the program

$ ldc2 -release -enable-contracts -enable-asserts test.d
-> failed assert in bar's in contract terminates the program

