Re: OK to do bit-packing with GC pointers?

2022-07-23 Thread Ola Fosheim Grøstad via Digitalmars-d-learn
On Saturday, 23 July 2022 at 08:32:12 UTC, Ola Fosheim Grøstad 
wrote:
Also `char*` can't work as char cannot contain pointers. I 
guess you would need to use `void*`.


Well, that is wrong for the standard collector, where the typing 
is dynamic (based on allocation, not on the type system). Then any 
pointer should work as long as it stays within the boundary of 
the allocated object.




Re: OK to do bit-packing with GC pointers?

2022-07-23 Thread Ola Fosheim Grøstad via Digitalmars-d-learn
On Saturday, 23 July 2022 at 00:55:14 UTC, Steven Schveighoffer 
wrote:
Probably. Though like I said, I doubt it matters. Maybe someone 
with more type theory or GC knowledge knows whether it should 
be OK or not.


It has nothing to do with type theory, only with the GC 
implementation. But his object has no pointers in it, so it will 
be allocated in a "no scan" heap, and that can't work.


Also `char*` can't work as char cannot contain pointers. I guess 
you would need to use `void*`.


But you need to understand the internals of the GC implementation 
to do stuff like this.
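
A minimal sketch of what that means in practice, assuming the 
default druntime GC (this is implementation behaviour, not 
something the spec promises):

```d
// Sketch, assuming the default druntime GC: the scan/no-scan decision is made
// per allocation from the allocated element type, not from the static type of
// whatever you later store into the block.
import core.memory : GC;
import std.stdio : writeln;

void main()
{
    // char contains no pointers, so this block should be NO_SCAN; a pointer
    // bit-packed into it is invisible to the collector.
    auto bytes = new char[](64);
    writeln((GC.getAttr(bytes.ptr) & GC.BlkAttr.NO_SCAN) != 0); // expected: true

    // void* may hold pointers, so this block should be scanned, and a plain
    // interior pointer stored here keeps its target alive.
    auto slots = new void*[](8);
    writeln((GC.getAttr(slots.ptr) & GC.BlkAttr.NO_SCAN) != 0); // expected: false
}
```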




Re: null == "" is true?

2022-07-19 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Tuesday, 19 July 2022 at 15:30:30 UTC, Bienlein wrote:
If the destination of a carrier was set to null, it implied 
that the destination was currently undefined. Then the robot 
brought the carrier to some rack where it was put aside for a 
while till the planning system had created a new production 
plan. The number of null pointer exceptions we had to fix 
because of this was countless. Never make null imply some 
meaning ...


This is due to a lack of proper abstractions. Null always has a 
meaning; if it didn't, you would not need it. In this particular 
case you could have used a singleton instead.
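
A minimal sketch of the singleton/sentinel idea in D (the names 
are hypothetical, just to illustrate):

```d
// Hypothetical names: one shared sentinel object that means "destination not
// planned yet", so code tests for it explicitly instead of dereferencing null.
class Destination
{
    string name;
    this(string name) { this.name = name; }
}

__gshared Destination undefinedDestination;

shared static this()
{
    undefinedDestination = new Destination("<undefined>");
}

bool isPlanned(Destination d)
{
    return d !is undefinedDestination;
}
```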


In a relational database you can choose between having null or 
having a large number of tables. The latter performs poorly. I am 
not talking about how to implement null, I am talking about the 
concept of information being absent. If you have to represent 
that, you have a de facto "null"; it doesn't matter if it is a 
singleton, address zero, NaN, or U+FFFE (for Unicode).







Re: null == "" is true?

2022-07-19 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Tuesday, 19 July 2022 at 10:29:40 UTC, Antonio wrote:

NULL is not the same as UNDEFINED.

The distinction is really important: NULL is a valid value 
(i.e. the person's phone number is NULL in the database)... Of 
course, you can represent this concept natively in your language 
(Nullable, Optional, Maybe ...) but it is not the same as 
UNDEFINED... because UNDEFINED says "This property has not been 
assigned to the DTO... do not take it into account".


IIRC someone wrote a master's thesis about the different roles of 
null values in databases and came up with many different null 
situations (was it five?).


E.g. for floating point you have two different types of 
not-a-number: one for representing a conversion 
failure/corruption of a data field and another for 
representing a computational result that cannot be represented. 
If we consider zero to be "empty" then floating point has four 
different "empty" values (+0, -0, qNaN, sNaN).





Re: null == "" is true?

2022-07-18 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Monday, 18 July 2022 at 17:20:04 UTC, Kagamin wrote:

Difference between null and empty is useless.


Not really. `null` typically means that the value is missing, 
irrelevant and not usable, which is quite different from having 
"" as a usable value.




Re: DIP1000

2022-07-02 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Saturday, 2 July 2022 at 09:42:17 UTC, Loara wrote:
But you first said "The compiler should deduce if a `scope` 
pointer points to heap allocated data or not" and when someone 
tells you this should happen only for non-`scope` pointers you 
say "But the compiler doesn't do that".


This discussion isn't going anywhere… :-)

(Please don't use quotation marks unless you actually quote.)



Re: DIP1000

2022-06-30 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Thursday, 30 June 2022 at 19:56:38 UTC, Loara wrote:
The deduction can happen even if you don't use `scope` 
attribute.


I don't understand what you mean, it could, but it doesn't. And 
if it did then you would not need `scope`…


When you use the `scope` attribute you're saying to the compiler 
"You have to allocate this object on the stack, don't try to use 
heap allocation".


These are function pointer parameters, how could it trigger 
allocation on the heap?


If you want to let the compiler decide what is the best approach 
then don't use `scope`.


But that doesn't work.

So `scope int v;` is equal to `int v;` since `v` is not a 
pointer, whereas `scope int *p` is different from `int *v;` 
since the latter can't point to stack allocated integers. This 
is the difference.


No, the latter can most certainly point to any integer. It is 
just that scope/scope ref/return ref is to be checked in @safe. 
Unfortunately it is way too limiting. Even standard flow typing 
appears to be as strong or stronger.


Since stack allocated objects are destroyed in reverse order, 
allowing a recursive `scope` attribute is a bit dangerous, 
as you can see in the following example:


If there are destructors then you can think of each stack 
allocated variable as introducing an invisible scope, but the 
compiler can keep track of this easily.


So the compiler knows the ordering. So if my function imposes an 
order on the lifetimes of the parameters, then the compiler 
should be able to check that the ordering constraint is satisfied.


Again, if you want to let the compiler deduce it then don't use 
`scope`.


But then it won't compile at all in @safe!



Re: DIP1000

2022-06-29 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Wednesday, 29 June 2022 at 05:51:26 UTC, bauss wrote:
Not necessarily, especially if the fields aren't value types. 
You can have stack allocated "objects" with pointers to heap 
allocated memory (heap allocated "objects".)


Those are not fields, those are separate objects… The compiler 
knows which is a field (part of the object).


You can't, or rather you shouldn't, have stack allocated fields 
within heap allocated "objects" however, as that is almost 
guaranteed to lead to problems.


That is perfectly ok if you use RAII and manage lifetimes.

Ex. from your example, even if the "node struct" you pass 
was allocated on the stack, the memory the "next" pointer 
points to might not be allocated in the same place.


Unless I'm misunderstanding what you're trying to say.


You did :). If you look at the post I made in general about 
DIP1000 and flow typing, you will see that I annotate scope with 
a number to indicate lifetime ordering.


If you have `connect(node* a, node* b){ a.next = b; }` then the 
compiler can deduce that the signature with formal parameters 
should be `connect(scope!N(node*) a, scope_or_earlier!N(node*) b)`. 
The compiler then checks that the actual parameters at the call 
site are subtypes (same type or proper subtype).










Re: DIP1000

2022-06-28 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Tuesday, 28 June 2022 at 21:40:44 UTC, Loara wrote:
When `connect()` returns it may happen that `b` is destroyed but 
`a` is not, so `a.next` contains a dangling pointer that


Not when connect() returns, but when the scope that connect() was 
called from exits. Still, this can be deduced; you just have to 
give the scopes an ordering.



not-scoped variable (`a.next` is not `scope` since this 
attribute is not transitive)


Well, that is a flaw: if the object is stack allocated then the 
fields are too.


is clearly dangerous since `connect` doesn't know which between 
`a` and `b` terminates first.


The compiler could easily deduce it. It is not difficult to see 
what the lifetime constraint must be.




Re: How can I check if a type is a pointer?

2022-06-25 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Saturday, 25 June 2022 at 14:18:10 UTC, rempas wrote:
In that example, the first comparison will execute the second 
branch and for the second comparison, it will execute the first 
branch. Of course, this trait doesn't exist; I used an example to 
show what I want to do. Any ideas?


I guess you can look at the source code for

https://dlang.org/phobos/std_traits.html#isPointer
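
For the branching itself, the usual pattern is a `static if` on 
`isPointer` (a minimal sketch, not your actual code):

```d
// Branch at compile time on whether T is a pointer type.
import std.traits : isPointer;
import std.stdio : writeln;

void describe(T)(T value)
{
    static if (isPointer!T)
        writeln("pointer to ", typeof(*value).stringof);
    else
        writeln("not a pointer: ", T.stringof);
}

void main()
{
    int x = 42;
    describe(&x);   // pointer to int
    describe(x);    // not a pointer: int
}
```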




Re: DIP1000

2022-06-24 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Friday, 24 June 2022 at 17:53:07 UTC, Loara wrote:

Why should you use `scope` here?


I probably shouldn't. That is why I asked in the «learn» forum…

A `scope` pointer variable may refer to a stack allocated 
object that may be destroyed once the function returns.


The objects are in the calling function, not in the connect() 
function. So they are not destroyed when the connect() function 
returns.


Since a linked list should not contain pointers to stack 
allocated data, you should avoid the `scope` attribute entirely 
and use `const` instead.


It was only an example. There is nothing wrong with connecting 
objects on the stack.




Re: DIP1000

2022-06-24 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Friday, 24 June 2022 at 09:08:25 UTC, Dukc wrote:
On Friday, 24 June 2022 at 05:11:13 UTC, Ola Fosheim Grøstad 
wrote:


No, the lifetime is the same if there is no destructor. Being 
counterintuitive is poor usability.


It depends on whether you expect the rules to be smart or 
simple. Smart is not necessarily better, as the Unix philosophy 
tells you. I'm sure you have experience with programs that are 
unpredictable and thus frustrating to use because they try to be 
too smart.


If this feature is meant to be used by application developers and 
not only library authors then it has to match their intuitive 
mental model of lifetimes. I would expect all simple value types 
to have the same lifetime as the scope.


The other option is to somehow instill a mental model in all 
users that simple types like ints also have default destructors.


If it is only for library authors, then it is ok to deviate from 
"intuition".




Re: DIP1000

2022-06-23 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Friday, 24 June 2022 at 03:03:52 UTC, Paul Backus wrote:
On Thursday, 23 June 2022 at 21:34:27 UTC, Ola Fosheim Grøstad 
wrote:

On Thursday, 23 June 2022 at 21:05:57 UTC, ag0aep6g wrote:

It's a weird rule for sure.


Another slightly annoying thing is that it cares about 
destruction order when there are no destructors.


If there are no destructors the lifetime ought to be 
considered the same for variables in the same scope.


Having different lifetime rules for different types is worse UX 
than having the same lifetime rules for all types.


Imagine writing a generic function which passes all of your 
unit tests, and then fails when you try to use it in real code, 
because you forgot to test it with a type that has a destructor.


No, the lifetime is the same if there is no destructor. Being 
counterintuitive is poor usability.


If you want to help library authors you issue a warning for 
generic code only.




Re: DIP1000

2022-06-23 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Thursday, 23 June 2022 at 21:05:57 UTC, ag0aep6g wrote:

It's a weird rule for sure.


Another slightly annoying thing is that it cares about 
destruction order when there are no destructors.


If there are no destructors the lifetime ought to be considered 
the same for variables in the same scope.





Re: DIP1000

2022-06-23 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Thursday, 23 June 2022 at 21:05:57 UTC, ag0aep6g wrote:
It means "may be returned or copied to the first parameter" 
(https://dlang.org/spec/function.html#param-storage). You 
cannot escape via other parameters. It's a weird rule for sure.


Too complicated for what it does… Maybe @trusted isn't so bad 
after all.




Re: DIP1000

2022-06-23 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Thursday, 23 June 2022 at 19:38:12 UTC, ag0aep6g wrote:

```d
void connect(ref scope node a, return scope node* b)
```


Thanks, so the `return scope` means «allow escape», not 
necessarily return?


But that only works for this very special case. It falls apart 
when you try to add a third node. As far as I understand, 
`scope` cannot handle linked lists. A `scope` pointer cannot 
point to another `scope` pointer.


One can do two levels, but not three. Got it. Works for some 
basic data-structures.
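
For reference, the complete two-node version built on that 
signature; as far as I understand the quoted rule it should 
compile with `-preview=dip1000` (note that `y` is declared before 
`x`, so it is destroyed after `x`):

```d
@safe:

struct node
{
    node* next;
}

// `return scope` b: may be copied into the first (ref) parameter.
void connect(ref scope node a, return scope node* b)
{
    a.next = b;
}

void main()
{
    node y;          // declared first, destroyed last, so it outlives x
    node x;
    connect(x, &y);  // x.next = &y
}
```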




DIP1000

2022-06-23 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

How am I supposed to write this:

```d
import std;
@safe:

struct node {
    node* next;
}

auto connect(scope node* a, scope node* b)
{
    a.next = b;
}

void main()
{
    node x;
    node y;
    connect(&x, &y);
}

```


Error: scope variable `b` assigned to non-scope `(*a).next`




Re: Need Help with Encapsulation in Python!

2022-06-17 Thread Ola Fosheim Grøstad via Digitalmars-d-learn
On Friday, 17 June 2022 at 14:14:57 UTC, Ola Fosheim Grøstad 
wrote:

On Friday, 17 June 2022 at 13:58:15 UTC, Soham Mukherjee wrote:
Is there any better way to achieve encapsulation in Python? 
Please rectify my code if possible.


One convention is to use "self._fieldname" for protected and 
"self.__fieldname" for private class attributes.


Example:

```python
class A:
    def __init__(self):
        self._x = 3
        self.__x = 4

a = A()
print(a.__dict__)
```
will produce the output: ```{'_x': 3, '_A__x': 4}``` .

As you can see the "self.__x" field has the name "_A__x" which 
makes it "hidden", but not inaccessible.


But you can use external tools to check for misuse.



Re: Need Help with Encapsulation in Python!

2022-06-17 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Friday, 17 June 2022 at 13:58:15 UTC, Soham Mukherjee wrote:
Is there any better way to achieve encapsulation in Python? 
Please rectify my code if possible.


One convention is to use "self._fieldname" for protected and 
"self.__fieldname" for private class attributes.





Re: Dynamic chain for ranges?

2022-06-13 Thread Ola Fosheim Grøstad via Digitalmars-d-learn
On Monday, 13 June 2022 at 14:03:13 UTC, Steven Schveighoffer 
wrote:
Merge sort only works if it's easy to manipulate the structure, 
like a linked-list, or to build a new structure, like if you 
don't care about allocating a new array every iteration.


The easiest option is to have two buffers that can hold all the 
items; in the last merge you merge back into the input storage. 
But yeah, it is only «fast» for very large arrays.




Re: Dynamic chain for ranges?

2022-06-13 Thread Ola Fosheim Grøstad via Digitalmars-d-learn
On Monday, 13 June 2022 at 13:22:52 UTC, Steven Schveighoffer 
wrote:
I would think sort(joiner([arr1, arr2, arr3])) should work, but 
it's not a random access range.


Yes, I got the error «must satisfy the following constraint: 
`isRandomAccessRange!Range`».


It would be relatively easy to make it work as a random access 
range if arr1, arr2, etc were fixed size slices.


Or I guess, use insertion-sort followed by merge-sort.





Re: Dynamic chain for ranges?

2022-06-13 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Monday, 13 June 2022 at 09:08:40 UTC, Salih Dincer wrote:
On Monday, 13 June 2022 at 08:51:03 UTC, Ola Fosheim Grøstad 
wrote:


But it would be much more useful in practice if "chain" was a 
dynamic array.


Already so:


I meant something like: chain = [arr1, arr2, …, arrN]

I don't use ranges, but I thought this specific use case could be 
valuable.


Imagine you have a chunked data structure of unknown lengths, and 
you want to "redistribute" items without reallocation.




Dynamic chain for ranges?

2022-06-13 Thread Ola Fosheim Grøstad via Digitalmars-d-learn
Is there a dynamic chain primitive, so that you can add to the 
chain at runtime?


Context: the following example on the front page is interesting.

```d
void main()
{
    int[] arr1 = [4, 9, 7];
    int[] arr2 = [5, 2, 1, 10];
    int[] arr3 = [6, 8, 3];
    sort(chain(arr1, arr2, arr3));
    writefln("%s\n%s\n%s\n", arr1, arr2, arr3);
}
```

But it would be much more useful in practice if "chain" was a 
dynamic array.
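
To make the wish concrete, a rough sketch of the kind of wrapper 
I have in mind (a hypothetical `DynamicChain` type, not anything 
in Phobos; a real version would need all the random-access-range 
primitives before `std.algorithm.sort` would accept it):

```d
import std.stdio : writeln;

// Hypothetical type: a random-access view over a runtime array of slices,
// indexable across chunk boundaries without copying the elements.
struct DynamicChain(T)
{
    T[][] chunks;

    size_t length() const
    {
        size_t n;
        foreach (c; chunks) n += c.length;
        return n;
    }

    // Map a global index to (chunk, offset) and return a reference into it.
    ref T opIndex(size_t i)
    {
        foreach (c; chunks)
        {
            if (i < c.length) return c[i];
            i -= c.length;
        }
        assert(0, "index out of range");
    }
}

void main()
{
    int[] arr1 = [4, 9, 7];
    int[] arr2 = [5, 2, 1, 10];
    auto view = DynamicChain!int([arr1, arr2]);

    // Simple exchange sort through the view, just to show in-place
    // "redistribution"; a real range type would let std.sort do this.
    foreach (i; 0 .. view.length)
        foreach (j; i + 1 .. view.length)
            if (view[j] < view[i])
            {
                auto tmp = view[i];
                view[i] = view[j];
                view[j] = tmp;
            }

    writeln(arr1, " ", arr2); // elements sorted across both arrays in place
}
```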




Re: Comparing Exceptions and Errors

2022-06-06 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Monday, 6 June 2022 at 18:08:17 UTC, Ola Fosheim Grøstad wrote:

There is no reason for D to undercut users of @safe code.


(Wrong usage of the term «undercut», but you get the idea…)




Re: Comparing Exceptions and Errors

2022-06-06 Thread Ola Fosheim Grøstad via Digitalmars-d-learn
On Monday, 6 June 2022 at 17:52:12 UTC, Steven Schveighoffer 
wrote:
Then that's part of the algorithm. You can use an Exception, 
and then handle the exception by calling the real sort. If in 
the future, you decide that it can properly sort with that 
improvement, you remove the Exception.


That is different from e.g. using a proven algorithm, like 
quicksort, but failing to implement it properly.


No? Why do you find it so? Adding a buggy optimization is exactly 
failing to implement it properly. There is a reference, the 
optimization should work exactly like the reference, but didn't.


Using asserts in @safe code should be no different than using 
asserts in Python code.


Python code <=> safe D code.

Python library implemented in C <=> trusted D code.

There is no reason for D to undercut users of @safe code. If 
anything D should try to use @safe to provide benefits that C++ 
users don't get.






Re: Comparing Exceptions and Errors

2022-06-06 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Monday, 6 June 2022 at 16:15:19 UTC, Ola Fosheim Grøstad wrote:
On Monday, 6 June 2022 at 15:54:16 UTC, Steven Schveighoffer 
wrote:
If it's an expected part of the sorting algorithm that it *may 
fail to sort*, then that's not an Error, that's an Exception.


No, it is not expected. Let me rewrite my answer to Sebastiaan 
to fit with the sort scenario:


Let me sketch up another scenario. Let's say I am making an 
online game and I need early feedback from beta-testers. So I run 
my beta-service with lots of asserts and logging; when actors 
fail, I discard them and relaunch them.


If the server went down on the first assert I wouldn't be able to 
test my server at all, because there would be no users willing to 
participate in a beta test where the server goes down every 20 
seconds! That is a very high risk factor, one that totally 
dominates this use scenario.


An engineer has to fill words such as «reliability», «utility», 
«probability» and «risk» with meaning that matches the use 
scenario and make deliberate choices (cost-benefit-risk 
considerations). That includes choosing an actor model, and each 
actor has to prevent failure from affecting other actors (by the 
definition of «actor»).





Re: Comparing Exceptions and Errors

2022-06-06 Thread Ola Fosheim Grøstad via Digitalmars-d-learn
On Monday, 6 June 2022 at 15:54:16 UTC, Steven Schveighoffer 
wrote:
If it's an expected part of the sorting algorithm that it *may 
fail to sort*, then that's not an Error, that's an Exception.


No, it is not expected. Let me rewrite my answer to Sebastiaan to 
fit with the sort scenario:


For instance, you may have a formally verified sort function, but 
it is too slow. So you optimize one selected bottleneck, but 
that cannot be verified, because verification is hard. That 
specific unverified soft spot is guarded by an assert. The 
compiler may remove it or not.


Your shipped product fails because the hard-to-read optimization 
wasn't perfect. So you trap the thrown assert and call the 
reference implementation instead.
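
A sketch of that fallback pattern (whether catching `AssertError` 
is advisable is exactly what is being debated in this thread, and 
the assert only fires in non-release builds):

```d
import core.exception : AssertError;
import std.algorithm : isSorted, sort;

// Stand-in for the hand-optimized, hard-to-verify implementation, guarded by
// an assert on its own postcondition.
void optimizedSort(int[] a)
{
    sort(a);   // imagine clever but fragile code here
    assert(isSorted(a), "optimizedSort broke its postcondition");
}

void robustSort(int[] a)
{
    try
    {
        optimizedSort(a);
    }
    catch (AssertError)
    {
        sort(a);   // fall back to the verified reference implementation
    }
}

void main()
{
    auto data = [3, 1, 2];
    robustSort(data);
    assert(isSorted(data));
}
```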


The cool thing with actors/tasks is that you can make them as 
small and targeted as you like and revert to fallbacks if they 
fail.


(Assuming 100% @safe code.)

It says that the programmer cannot attribute exactly where this 
went wrong because otherwise, he would have accounted for it, 
or thrown an Exception instead (or some other mitigation).


He can make a judgement. If this happened in a safe pure function 
then it would most likely be the result of what it was meant to 
do: check that the assumptions of the algorithm hold.



Anything from memory corruption, to faulty hardware, to bugs in 
the code, could be the cause.


That is not what asserts check! They will be removed if the 
static analyzer is powerful enough. All the information to remove 
the assert should be in the source code.


You are asserting that *given all the constraints of the type 
system* then the assert should hold.


Memory corruption could make an assert succeed when it should 
not, because then anything can happen! It cannot catch memory 
corruption reliably because it is not excluded from optimization. 
You need something else for that, something that turns off 
optimization for all asserts.




Exactly. Use Exceptions if it's recoverable, Errors if it's not.


This is what is not true. Asserts say something only about the 
algorithm they are embedded in: they say that the algorithm makes 
a wrong assumption, and that is all. They say nothing about the 
calling environment.



A failed assert could be because of undefined behavior. It 
doesn't *imply* it, but it cannot be ruled out.


And a successful assert could happen because of undefined 
behaviour or optimization! If you want these types of guards then 
you need to propose a type of assert that would be excluded from 
optimization (which might be a good idea!).


In the case of UB anything can happen. It is up to the programmer 
to make that judgment based on the use scenario. It is a matter 
of probabilistic calculations in relation to the use scenario of 
the application.


As I pointed out elsewhere: «reliability» has to be defined in 
terms of the use scenario by a skilled human being, not in terms 
of some kind of abstract thinking about compiler design.






Re: Comparing Exceptions and Errors

2022-06-06 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Monday, 6 June 2022 at 06:56:46 UTC, Ola Fosheim Grøstad wrote:

On Monday, 6 June 2022 at 06:14:59 UTC, Sebastiaan Koppe wrote:

Those are not places where you would put an assert.

The only place to put an assert is when *you* know there is no 
recovery.


No, asserts are orthogonal to recovery. They just specify the 
assumed constraints in the implementation of the algorithm. You 
can view them as comments that can be read by a computer and 
checked for that specific function.


I guess an informal way to express this is:

*Asserts are comments that you would need to make when explaining 
why the algorithm works to another person (or to convince 
yourself that it works).*


As for unnecessary asserts, it would be nice to have something 
more powerful than static assert, something that could reason 
about simple runtime issues and issue an error if it could not 
establish them. E.g.:

```
int i = 0;
…later…
i++;
…much later…
compiletime_assert(i>0);
```



Re: Comparing Exceptions and Errors

2022-06-06 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Monday, 6 June 2022 at 06:14:59 UTC, Sebastiaan Koppe wrote:

Those are not places where you would put an assert.

The only place to put an assert is when *you* know there is no 
recovery.


No, asserts are orthogonal to recovery. They just specify the 
assumed constraints in the implementation of the algorithm. You 
can view them as comments that can be read by a computer and 
checked for that specific function.


For instance you can have a formally proven reference 
implementation full of asserts, then one optimized version where 
you keep critical asserts or just the postcondition. If the 
optimized version fails, then you can revert to the reference 
(with no or few asserts, because it is already formally verified).


There is nothing wrong with having many asserts or asserts you 
«know» to hold. They are helpful when you modify code and 
data structures.


Maybe one could have more selective ways to leave out asserts 
(e.g. based on revision) so that you remove most asserts in 
actors that have not changed since version 1.0 and retain more 
asserts in new actors.


Also, if you fully check the full postcondition (in @safe code) 
then you can remove all asserts in release builds, as they are 
inconsequential.


So the picture is more nuanced and it should be up to the 
programmer to decide, but maybe a more expressive and selective 
regime is useful.


Re: Comparing Exceptions and Errors

2022-06-05 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Monday, 6 June 2022 at 04:59:05 UTC, Ola Fosheim Grøstad wrote:
An assert only says that the logic of that particular function 
is not meeting the SPEC.


Actually, the proper semantics are weaker than that; the spec 
would be the preconditions and postconditions. Asserts are 
actually just steps to guide a solver to find a proof faster (or 
at all) for that particular function.


In practice asserts are «checked comments» about what the 
programmer assumed when he/she implemented the algorithm of that 
function.


A failed assert just says that the assumption was wrong.

If the compiler can prove that an assert holds given legal input, 
then it will be removed. As such, it follows that asserts have 
nothing to do with undefined behaviour in terms of illegal input. 
The assert is not there to guard against that, so the compiler 
removes it, as it assumes that the type constraints of the input 
hold.




Re: Comparing Exceptions and Errors

2022-06-05 Thread Ola Fosheim Grøstad via Digitalmars-d-learn
On Sunday, 5 June 2022 at 23:57:19 UTC, Steven Schveighoffer 
wrote:
It basically says "If this condition is false, this entire 
program is invalid, and I don't know how to continue from here."


No, it says: this function failed to uphold this invariant. You 
can perfectly well recover if you know what that function touches.


For instance if a sort function fails, then you can call a slower 
sort function.


Or in terms of actors/tasks: if one actor-solver fails 
numerically, then you can recover and use a different 
actor-solver.


An assert says nothing about the whole program.

An assert only says that the logic of that particular function is 
not meeting the SPEC.


That’s all. If you use asserts for something else then you don’t 
follow the semantic purpose of asserts.


Only the programmer knows if recovery is possible, not the 
compiler.


A failed assert is not implying undefined behaviour in @safe code.




Re: Comparing Exceptions and Errors

2022-06-05 Thread Ola Fosheim Grøstad via Digitalmars-d-learn
On Sunday, 5 June 2022 at 21:08:11 UTC, Steven Schveighoffer 
wrote:
Just FYI, this is a *different discussion* from whether Errors 
should be recoverable.


Ok, but do you see a difference between being recoverable 
anywhere and being recoverable at the exit point of an execution 
unit like an Actor/Task?


Whether specific conditions are considered an Error or an 
Exception is something on which I have differing opinions than 
the language.


Ok, like null-dereferencing and division-by-zero perhaps.

However, I do NOT have a problem with having an 
assert/invariant mechanism to help prove my program is correct.


Or rather the opposite, prove that a specific function is 
incorrect for a specific input configuration.


The question is, if a single function is incorrect for some 
specific input, why would you do anything more than disabling 
that function?





Re: Comparing Exceptions and Errors

2022-06-05 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Sunday, 5 June 2022 at 14:24:39 UTC, Ali Çehreli wrote:

  void add(int i) {  // <-- Both arrays always same size
    a ~= i;
    b ~= i * 10;
  }

  void foo() {
    assert(a.length == b.length);  // <-- Invariant check
    // ...
  }


Maybe it would help if we can agree that this assert ought to be 
statically proven to hold and would therefore never be evaluated 
in running code. Emitting asserts is just a sign of failed static 
analysis (which is common, but that is the most sensible 
interpretation from a verification viewpoint).


The purpose of asserts is not to test the environment. The 
purpose is to "prove" that the specified invariant of the 
function holds for all legal input.


It follows that the goal of an assert is not to test if the 
program is in a legal state!


I understand why you say this, but if this were the case then we 
could not remove any asserts by static analysis. :-/




Re: Comparing Exceptions and Errors

2022-06-05 Thread Ola Fosheim Grøstad via Digitalmars-d-learn
Ok, so I am a bit confused about what is an Error and what is 
not… According to core.exception there is a wide array of runtime 
Errors:


```
RangeError
ArrayIndexError
ArraySliceError
AssertError
FinalizeError
OutOfMemoryError
InvalidMemoryOperationError
ForkError
SwitchError
```

I am not sure that any of these should terminate anything outside 
the offending actor, but I could be wrong as it is hard to tell 
exactly when some of those are thrown.


InvalidMemoryOperationError sound bad, of course, but the docs 
says «An invalid memory operation error occurs in circumstances 
when the garbage collector has detected an operation it cannot 
reliably handle. The default D GC is not re-entrant, so this can 
happen due to allocations done from within finalizers called 
during a garbage collection cycle.»


This sounds more like an issue that needs fixing…


On Sunday, 5 June 2022 at 14:24:39 UTC, Ali Çehreli wrote:
The code is written in a way that both arrays will *always* 
have equal number of elements. And there is a "silly" check for 
that invariant. So far so good. The issue is what to do *when* 
that assert fails.


Are you sure that it was a silly programmer mistake? I am not 
sure at all.


Ok, so this is a question of the layers:

```
------------ top layer ------------
D  | E
   |
---------- middle layer -----------
B  | C
   |
   |
---------- bottom layer -----------
A
```

If the failed assert is happening in a lower layer A then code on 
the outer layer should fault (or roll back a transaction). 
Whether it is reasonable to capture that error and suppress it 
depends on how independent you want those layers to be in your 
architecture. It also depends on the nature of layer A.


If the failed assert happens in middle layer section B, then D 
would be affected, but not A, C or E.


The key issue is that the nature of layers is informal in the 
language (in fact, in most languages, a weakness), so only the 
programmer can tell what is reasonable or not.


In fact, when we think about it, most aspects of what is 
expected from a program are informal… so it is very difficult to 
make judgments at the compiler level.


Is the only other culprit the runtime kernel? I really don't 
know who else may be involved.


I mean the application's «custom actor kernel», a hardened piece 
of code that is not modified frequently and is heavily tested. 
The goal is to keep uncertainty local to an actor so that you can 
toss out misbehaving actors and keep the rest of the system 
working smoothly (99.5% of the time, 50 minutes downtime per 
week).


Actors are expected to contain bugs because the game system is 
continuously modified (to balance the game play, to get new 
content, more excitement, whatever…). This is why we want 100% 
@safe code as a feature.



There are also bugs in unrelated actor code writing over each 
other's memory.


But that cannot happen if I decide to make actors 100% @safe and 
only let them interact with each other through my «custom actor 
kernel»?



You are free to choose to catch Errors and continue under the 
assumption that it is safe to do so.


Now I am confused!! Steven wrote «I've thought in the past that 
throwing an error really should not throw, but log the error 
(including the call stack), and then exit without even attempting 
to unwind the stack.»


Surely, the perspective being promoted is to make sure that 
Errors cannot be stopped from propagating? That is why this 
becomes a discussion?


If an Error can propagate through "nothrow" then the compiler 
should emit handler code for it and issue a warning. If you don't 
want that then the programmer should safeguard against it, 
meaning: manually catch and abort, or do manual checks in all 
locations above it where Errors can arise. The compiler knows 
where.


Not really sure why D has "nothrow", it doesn't really fit with 
the rest of the language? To interface with C++ perhaps?? If that 
is the case, just adopt C++ "noexcept" semantics, use assert() 
for debugging only, in "nothrow" code. And discourage the use of 
"nothrow". Heck, throw in a compiler switch to turn off "nothrow" 
if that is safe.



The advice in the article still holds for me. I think the main 
difference is in the assumptions we make about an Errors: Is it 
a logic error in actor code or some crazy state that will cause 
weirder results if we continue. We can't know for sure.


And this is no different from other languages with a runtime. You 
cannot be sure, but it probably isn't a runtime issue, and even 
if it was… players will be more upset by not being able to play 
than by some weird effects happening. Same for a chat service. 
Same for being able to read Wikipedia caches (worse to have no 
access than to have 1% of pages garbled on display until the 
server is updated).


Different settings need different solutions. So, maybe 
interfacing with C++ requires "nothrow", but if that is the only 
reason… why let everybody pay a […]

Re: Comparing Exceptions and Errors

2022-06-05 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Sunday, 5 June 2022 at 11:13:48 UTC, Adam D Ruppe wrote:
On Sunday, 5 June 2022 at 10:38:44 UTC, Ola Fosheim Grøstad 
wrote:
That is a workaround that makes other languages more 
attractive.


It is what a lot of real world things do since it provides 
additional layers of protection while still being pretty easy 
to use.


Yes, it is not a bad solution in many cases. It is a classic 
solution for web servers, but web servers typically don't retain 
a variety of mutable state of many different types (a webserver 
can do well with just a shared memcache).



My code did and still does simply catch Error and proceed. Most 
Errors are completely harmless; RangeError, for example, is 
thrown *before* the out-of-bounds write, meaning it prevented 
the damage, not just notified you of it. It was fully 
recoverable in practice before that silly Dec '17 dmd change, 
and tbh still is after it in a lot of code.


Yes, this is pretty much how I write input validators in 
Python web services. I don't care if the validator failed or if 
the input failed; in either case the input has to be stopped, but 
the service can continue. If there is a suspected logic failure, 
log and/or send a notification to the developer, but for the end 
user it is good enough that they "for now" send some other input 
(e.g. avoiding some Unicode letter or whatever).



Or perhaps redefine RangeError into RangeException, 
OutOfMemoryError as OutOfMemoryException, and such for the 
other preventative cases and carry on with joy, productivity, 
and correctness.


For a system-level language such decisions ought to be in the 
hands of the developer so that he can write his own runtime. 
Maybe some kind of transformers, so that libraries can produce 
Errors but have them transformed to something else at boundaries.


If I want to write an actor-based runtime and do all my 
application code as 100% @safe actors that are fully «reliable», 
then that should be possible in a system level language.


The programmer's definition and evaluation of «reliability» in 
the context of a «casual game server» should carry more weight 
than some out-of-context-reasoning about «computer science» (it 
isn't actually).





Re: Comparing Exceptions and Errors

2022-06-05 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Sunday, 5 June 2022 at 00:18:43 UTC, Adam Ruppe wrote:

Run it in a separate process with minimum shared memory.


That is a workaround that makes other languages more attractive. 
It does not work when I want to have 1000+ actors in my game 
server (which at this point most likely will be written in Go, 
sadly).


So a separate process is basically a non-solution. At this point 
Go seems to be the best technology of all the bad options! A 
pity, as it is not an enjoyable language IMO, but the goals are 
more important than the means…


The problem here is that people are running an argument as if 
most D software were control software for chemical processes or 
database kernels. Then people quote writings on safety measures 
that evolved in the context/mindset of control software in the 
80s and 90s. And that makes no sense when only Funkwerk 
(and possibly 1 or 2 others) write such software in D.


The reality is, most languages call C libraries and have C code 
in their runtime, under the assumption that those C libraries and 
runtimes have been hardened and proven to be reliable, with a low 
probability of failure.


*Correctness **is** probabilistic.* Even in the case of 100% 
verified code, as there is a possibility that the spec is wrong.


*Reliability measures depend on the context of use*. What 
«reliable» means depends on skilled judgment used to evaluate 
the software in its context of use. «Reliable» is not a 
context-independent absolute.






Re: Comparing Exceptions and Errors

2022-06-05 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Sunday, 5 June 2022 at 07:28:52 UTC, Sebastiaan Koppe wrote:

Go has panic. Other languages have similar constructs.


And recover.

So D will never be able to provide actors and provide fault 
tolerance.


I guess it depends on the programmer.


But it isn’t if you cannot prevent Error from propagating.


Is the service in a usable state?


Yes, the actor code failed. The actor code changes frequently, 
not the runtime kernel.


The actor code is free to call anything, including @system


@trusted code. How is this different from FFI in other languages? 
As a programmer you make a judgment. The D argument is to prevent 
the programmer from making a judgment?


How would the actor framework know when an error is due to a 
silly bug in an isolated


How can I know that a failure in Python code isn’t caused by C

There is no difference in the situation. I make a judgment as a 
programmer.


Re: Comparing Exceptions and Errors

2022-06-05 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Sunday, 5 June 2022 at 07:21:18 UTC, Ola Fosheim Grøstad wrote:
You can make the same argument for an interpreter: if an assert 
fails in the intrrpreter code then that could be a fault in the 
interpreter therefore you should shut down all programs being 
run by that interpreter.


Typo: if an assert fails in the interpreted code, then that could 
be a sign that the interpreter itself is flawed. Should you then 
stop all programs run by that interpreter?


The point: in the real world you need more options than the 
nuclear option. Pessimizing everywhere does not give higher 
satisfaction to the end user.


(iPhone keyboard)


Re: Comparing Exceptions and Errors

2022-06-05 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Sunday, 5 June 2022 at 03:43:16 UTC, Paul Backus wrote:

See here:

https://bloomberg.github.io/bde-resources/pdfs/Contracts_Undefined_Behavior_and_Defensive_Programming.pdf


Not all software is banking applications. If an assert fails, 
that means that the program logic is wrong, not that the program 
is in an invalid state. (Invalid state is a stochastic 
consequence, and detection can happen much later.)


So that means that you should just stop the program. It means 
that you should shut down all running instances of that program 
on all computers across the globe. That is the logical 
consequence of this perspective, and it makes sense for a banking 
database.


It does not make sense for the constructor of Ants in a computer 
game service.


It is better to have an enjoyable game delivered with fewer ants 
than a black screen all weekend.


You can make the same argument for an interpreter: if an assert 
fails in the intrrpreter code then that could be a fault in the 
interpreter therefore you should shut down all programs being run 
by that interpreter.


The reality is that software is layered. Faults at different 
layers should have different consequences at the discretion of a 
capable programmer.




Re: Comparing Exceptions and Errors

2022-06-05 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Sunday, 5 June 2022 at 00:40:26 UTC, Ali Çehreli wrote:
Errors are thrown when the program is discovered to be in an 
invalid state. We don't know what happened and when. For 
example, we don't know whether the memory has been overwritten 
by some rogue code.


That is not very probable in 100% @safe code. You are basically 
saying that D cannot compete with Go and other «safe» languages. 
Dereferencing a null pointer usually means that some code failed 
to create an instance and check for it.


My code can detect that the failure is local, under the 
assumption that the runtime isn't a piece of trash.


What happened? What can we assume. We don't know and we cannot 
assume any state.


So D will never be able to provide actors and provide fault 
tolerance.



Is the service in a usable state?


Yes, the actor code failed. The actor code changes frequently, 
not the runtime kernel.


Possibly. Not shutting down might produce incorrect results. Do 
we prefer up but incorrect or dead?


I prefer that the service keeps running: a chat service, a game 
service, data delivered with a hashed «checksum». Not all 
software is a database engine where you have to pessimize about 
bugs in the runtime kernel.


If the data delivered is good enough for the client and better 
than nothing then the service should keep running!!!


I hope there is a way of aborting the program when there are 
invariant


Invariants are USUALLY local. I don't write global spaghetti 
code. As a programmer you should be able to distinguish between 
local and global failure.


You are assuming that the programmer is incapable of making 
judgements. That is assuming way too much.







Re: Comparing Exceptions and Errors

2022-06-04 Thread Ola Fosheim Grøstad via Digitalmars-d-learn
On Saturday, 4 June 2022 at 22:01:57 UTC, Steven Schveighoffer 
wrote:
You shouldn't retry on Error, and you shouldn't actually have 
any Errors thrown.


So what do you have to do to avoid having Errors thrown? How do 
you make your task/handler fault tolerant in 100% @safe code?


Re: Comparing Exceptions and Errors

2022-06-04 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Saturday, 4 June 2022 at 18:32:48 UTC, Sebastiaan Koppe wrote:
Most won't throw an Error though. And typical services have 
canary releases and rollback.


So you just fix it, which you have to do anyway.


I take it you mean manual rollback, but the key issue is that you 
want to retry on failure. Not infrequently the source of the 
failure will be in the environment; the code just didn't handle 
the failure correctly.


On a service with an SLA of 99.999%, the probable "failure time" 
would be 6 seconds per week, so if you can retry you may still 
run fine even if you failed to check correctly for an error on 
that specific subsystem. That makes the system more 
resilient/robust.





Re: Comparing Exceptions and Errors

2022-06-04 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Saturday, 4 June 2022 at 16:55:50 UTC, Sebastiaan Koppe wrote:
The reasoning is simple: Error + nothrow will sidestep any RAII 
you may have. Since you cannot know what potentially wasn't 
destructed, the only safe course of action is to abandon ship.


Why can't Error unwind the stack properly?


Yes, in plenty of cases that is completely overkill.

Then again, programs should be written to not assert in the 
first place.


In a not-minuscule service you can be pretty certain that some ±1 
bugs will be there, especially in a service that is receiving new 
features on a regular basis. So, if you get an index/key error or 
a null dereference that wasn't checked for, unwinding that 
actor/task/handler makes sense; shutting down the service doesn't 
make sense.


If you allow the whole service to go down then you have opened a 
denial-of-service vector, which is a problem if the service is 
attracting attention from teens/immature adults (e.g. games, 
social apps, political sites, educational sites, etc.).


Considering most asserts I have seen are either due to a bad 
api or just laziness - and shouldn't have to exist in the first 
place - maybe it's not that bad.


Well, the problem is that if a usually reliable subsystem is 
intermittently flaky and you get this behaviour, then that isn't 
something you can assume will be caught in tests (you cannot test 
for all scenarios, only the likely ones).


I am not a fan of Go, but it is difficult to find a more balanced 
solution, and Go 1.18 has generics, so it is becoming more 
competitive!


At the end of the day you don't have to love a language to choose 
it… and for a service, runtime behaviour is more important than 
other issues.






Re: Comparing Exceptions and Errors

2022-06-04 Thread Ola Fosheim Grøstad via Digitalmars-d-learn
On Saturday, 4 June 2022 at 01:17:28 UTC, Steven Schveighoffer 
wrote:
I've thought in the past that throwing an error really should 
not throw, but log the error (including the call stack), and 
then exit without even attempting to unwind the stack. But code 
at least expects an attempt to throw the Error up the stack, so 
code that is expecting to catch it would break.


This is too harsh for a service that is read-only, meaning a 
service that only reads from a database and never writes to it. 
All running threads have to be given a chance to exit gracefully, 
at the very minimum.


What is the source for these errors anyway? A filesystem not 
responding? A crashed device driver? A race condition? A 
deadlock? Starvation? Many sources for errors can be recovered 
from by rescheduling in a different order at a different time.


What I'd like to see is a fault tolerant 100% @safe actor pattern 
with local per-actor GC. By fault tolerant I mean that the actor 
is killed and then a new actor is rescheduled (could be an 
alternative "reference" implementation or the same after a time 
delay).


Also, what is the purpose of @safe if you have to kill all 
threads? Such rigidity will just make Go look all the more 
attractive to service providers!




Re: Why are structs and classes so different?

2022-05-19 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Wednesday, 18 May 2022 at 10:53:03 UTC, forkit wrote:

Here is a very interesting article that researches this subject.


Yeah, he got it right. It is syntax sugar that makes verbose C++ 
code easier to read. Use struct for internal objects and 
tuple-like usage, and class for major objects in your model.


But Simula used class for more: coroutines and library modules. 
So, you could take a class and use its scope in your code and 
thereby get access to the symbols/definitions in it.


The successor to Simula, called Beta, took it one step further 
and used the class concept for functions, block scopes, loops, 
almost everything.


The language Self went even further and collapsed the concepts of 
class and object into one... perhaps too far... People prefer 
TypeScript over JavaScript for a reason.


Re: Why are structs and classes so different?

2022-05-16 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Monday, 16 May 2022 at 15:18:11 UTC, Kevin Bailey wrote:
I would say that assignment isn't any more or less of an issue, 
as long as you pass by reference:


What I mean is if you write your code for the superclass, and 
later add a subclass with some invariants that depend on the 
superclass fields… then upcast an instance of the subclass to the 
superclass and pass it on… you risk the same issue. The subclass 
invariant can be broken because of sloppy modelling.


The premise for "slicing" being an issue is that people who write 
functions have no clue about how the type system works or how to 
properly model. So it is a lot of fuzz over nothing, because with 
that assumption you can make the exact same argument for 
references. (I've never had any practical issues related to 
"slicing", ever…)


Besides, slicing can very well be exactly what you want, e.g. if 
you have a superclass "EntityID" and want to build an array of 
all the EntityIDs… there is nothing wrong with slicing out the 
EntityID from all subclass instances when building that array.


Now, there are many other issues with C++, mostly related to the 
fact that it gives very high priority to avoiding overhead. E.g. 
take a new feature like std::span: if you create a subspan 
("slice" in D terminology) and the original span does not contain 
enough elements, then C++ regards that as undefined behaviour and 
will happily return a span into arbitrary memory past the end of 
the original span. C++ is very unforgiving in comparison to 
"higher level" languages like D.


If we extend this reasoning to D classes, one can say that D 
classes are convenience constructs that do not pay as much 
attention to overhead. One example of this is how interfaces are 
implemented: each interface takes a full pointer in every 
instance of the class. The monitor mutex is another example. And 
the fact that pointers to classes have simpler syntax than 
pointers to structs also suggests that classes are designed more 
for convenience than principles.
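
A quick way to see that per-instance cost; the exact numbers 
depend on the target, so the comments only note what grows 
(assuming the regular D class ABI on a 64-bit build):

```d
// Each class instance starts with a vptr and a monitor slot, and every
// implemented interface adds one more pointer to the instance.
interface I1 { void f(); }
interface I2 { void g(); }

class Plain              { int x; }
class OneIface  : I1     { int x; void f() {} }
class TwoIfaces : I1, I2 { int x; void f() {} void g() {} }

void main()
{
    import std.stdio : writeln;
    writeln(__traits(classInstanceSize, Plain));     // header + fields
    writeln(__traits(classInstanceSize, OneIface));  // + one interface pointer
    writeln(__traits(classInstanceSize, TwoIfaces)); // + two interface pointers
}
```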


Whether this is good or bad probably depends on the user group:

1. Those that are primarily interested in low level with a bit of 
high level might think it is "too much" and favour structs.


2. Those that are primarily interested in high level with a bit 
of low level might think otherwise.


In C++ everyone belongs to group 1. In other system languages 
such as D and Rust you probably have a large silent majority in 
group 2. (All those programmers who find C++ too brittle or hard 
to get into, but want comparable performance.)











Re: Why are structs and classes so different?

2022-05-15 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Sunday, 15 May 2022 at 20:05:05 UTC, Kevin Bailey wrote:
I've been programming in C++ full time for 32 years, so I'm 
familiar with slicing. It doesn't look to me like there's a 
concern here.


Yes, slicing is not the issue. Slicing is a problem if you do 
"assignments" through a reference that is typed as the 
superclass… so references won't help.


The original idea might have been that structs were value types, 
but that is not the case in D. They are not; you can take the 
address…


So what you effectively have is that structs follow C layout 
rules and D classes are not required to (AFAIK), but there is an 
ABI and there are C++-layout classes, so that freedom is somewhat 
limited… D classes also have a mutex for monitors.


In an ideal world one might be tempted to think that classes were 
ideal candidates for alternative memory management mechanisms 
since they are allocated on the heap. Sadly this is also not true 
since D is a system level programming language and you get to 
bypass that "type characteristic" and can force them onto the 
stack if you desire to do so…








Re: Back to Basics at DConf?

2022-05-12 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Friday, 13 May 2022 at 04:39:46 UTC, Tejas wrote:
On Friday, 13 May 2022 at 04:19:26 UTC, Ola Fosheim Grøstad 
wrote:

On Friday, 13 May 2022 at 03:31:53 UTC, Ali Çehreli wrote:

On 5/12/22 18:56, forkit wrote:

> So...you want to do a talk that challenges D's complexity, by
> getting back to basics?

I wasn't thinking about challenging complexity but it gives 
me ideas.


I am looking for concrete topics like templates, classes, 
ranges, rvalues, etc. Are those interesting?


I suggest: patterns for @nogc allocation and where D is going 
with move semantics and reference counting.


Basically, where is D heading with @nogc?

Take each pattern from C++ and Rust and show the D counterpart, 
with an objective analysis that covers pitfalls and areas that 
need more work.


I feel that it'd be best if any video discussion/talk about 
move semantics happens _after_ [DIP 
1040](https://github.com/dlang/DIPs/blob/72f41cffe68ff1f2d4c033b5728ef37e282461dd/DIPs/DIP1040.md#initialization) is merged/rejected, so that the video doesn't become irrelevant after only a few years.


I think the purpose of conferences is to assess the current state 
and be forward looking. After 14 months I would expect Walter to 
know if it is going to be put to rest or not, so just email him I 
guess?




Re: Back to Basics at DConf?

2022-05-12 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Friday, 13 May 2022 at 03:31:53 UTC, Ali Çehreli wrote:

On 5/12/22 18:56, forkit wrote:

> So...you want to do a talk that challenges D's complexity, by
> getting back to basics?

I wasn't thinking about challenging complexity but it gives me 
ideas.


I am looking for concrete topics like templates, classes, 
ranges, rvalues, etc. Are those interesting?


I suggest: patterns for @nogc allocation and where D is going 
with move semantics and reference counting.


Basically, where is D heading with @nogc?

Take each pattern from C++ and Rust and show the D counterpart, 
with an objective analysis that covers pitfalls and areas that 
need more work.





Re: While loop on global variable optimised away?

2022-05-12 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Wednesday, 11 May 2022 at 10:01:18 UTC, Johan wrote:
Any function call (inside the loop) for which it cannot be 
proven that it never modifies your memory variable will work. 
That's why I'm pretty sure that mutex lock/unlock will work.


I think the common semantics ought to be that everything written 
by thread A before it releases the mutex will be visible to 
thread B when it acquires the same mutex, and any assumptions 
beyond this are nonportable?
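
A minimal sketch of that contract with druntime's 
`core.sync.mutex` (deliberately no atomics; the flag is only 
touched while holding the lock):

```d
import core.sync.mutex : Mutex;
import core.thread : Thread;
import core.time : msecs;

__gshared bool done;     // not atomic on purpose: always accessed under the lock
__gshared Mutex guard;

void worker()
{
    Thread.sleep(10.msecs);
    guard.lock();
    done = true;         // published to other threads when the mutex is released
    guard.unlock();
}

void main()
{
    guard = new Mutex;
    auto t = new Thread(&worker);
    t.start();

    for (;;)
    {
        guard.lock();
        scope (exit) guard.unlock();
        if (done) break; // the read happens while holding the same mutex
    }
    t.join();
}
```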




Re: What are (were) the most difficult parts of D?

2022-05-11 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Wednesday, 11 May 2022 at 05:41:35 UTC, Ali Çehreli wrote:
What are you stuck at? What was the most difficult features to 
understand? etc.


Also, if you intend to use the responses for planning purposes, 
keep in mind that people who read the forums regularly are more 
informed about pitfalls than other users. Forum-dwellers have a 
richer understanding of the landscape and probably view the 
feature set in a different light.





Re: What are (were) the most difficult parts of D?

2022-05-11 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Wednesday, 11 May 2022 at 05:41:35 UTC, Ali Çehreli wrote:
What are you stuck at? What was the most difficult features to 
understand? etc.


No singular feature, but the overall cognitive load if you use 
the language sporadically. That could be most users, who don't 
use it for work or have it as their main hobby.


As with all languages, cognitive load increases when things do 
not work as one would intuitively expect or if a feature is 
incomplete in some specific case. D has some of the same issues 
as C++: you either deal with the details or you choose to go more 
minimalistic in your usage.


If you read other people's D code you can easily (IMO) see that 
there is no unified style or usage of features, so it seems like 
people stick to their own understanding of «good code», which can 
make it tiresome to read D code by others (e.g. the standard 
library). I guess some good coherent educational material is 
missing, so people are finding their own way. There are no 
obvious codebases that highlight «best practice». I suspect 
metaprogramming is pushed too much and that this has a negative 
effect on the legibility of code bases.


You basically increase the cognitive load further if you choose 
to be slightly different from other languages and focus on 
special cases. Examples: the proposal for interpolated strings, 
DIP1000, certain aspects of D's generics syntax, etc.


When you already have high cognitive load you should be very 
careful about not adding more "slightly unusual/unexpected" 
semantics/syntax.



To make it more meaningful, what is your experience with other 
languages?


I believe what makes Python a favoured language by many is that 
you can use it sporadically without relearning. Spending less 
time reading documentation is always a win.


To do things right in C++ you have to look things up all the 
time; this is only OK for people who use it many hours every week.


In general I think the ergonomics would be much better if the 
historical baggage from C/C++ had been pushed aside in favour of 
a more intuitive clean approach. I am not sure if many new users 
have a good understanding of C/C++ anyway.


Of course, this perspective is highly irrelevant as it is clear 
now that the current direction of language evolution is to 
continue to absorb C and not really provide a clean improvement 
over C, but rather be more like an extension. (More like C++ 
and Objective-C than Rust).


I think it will be difficult to attract young programmers with 
this approach as they are less likely to be comfortable with the 
peculiarities of C.




Re: Parameters declared as the alias of a template won't accept the arguments of the same type.

2022-05-04 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Wednesday, 4 May 2022 at 10:15:18 UTC, bauss wrote:

It can be a bug __and__ an enhancement.


Alright, but we need a DIP to get the enhancement in. :-) I 
don't think anything will improve without one.


I would assume that C++ template resolution is O(N), so I am not 
as pessimistic as Ali, but I could be wrong.




Re: Parameters declared as the alias of a template won't accept the arguments of the same type.

2022-05-03 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Tuesday, 3 May 2022 at 15:41:08 UTC, Ali Çehreli wrote:

We also have NP-completeness.


Ok, so C++ has similar limitations when you have a template with 
an unknown parameter in a function parameter, but is this because 
it would be NPC? Also, do we know that it cannot be resolved for 
the typical case in reasonable time? Maybe one can add 
constraints and heuristics that keeps it reasonable?


No, the question is whether D is in a category of programming 
languages that it isn't.


Fair enough, but type unification needs to be done differently 
anyway, just to support aliases being recognized in function 
calls. Why not look at what is possible before going there? 
Certainly, having something more expressive than C++ would not be 
a bad thing?





Re: Parameters declared as the alias of a template won't accept the arguments of the same type.

2022-05-03 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Tuesday, 3 May 2022 at 06:20:53 UTC, Elfstone wrote:
Yeah, I understand some cases are impossible, and to be 
avoided. I believe your example is also impossible in C++, but 
it's better the compiler do its job when it's totally possible 
- needless to say, C++ compilers can deduce my _dot_.
Constraints/Concepts are useful, but what's needed here is a 
literal _alias_. There's no ambiguity, no extra template 
parameters introduced in the declaration.


I haven't read all of the posts in this thread, but D in general 
doesn't do proper type unification so template composition in D 
is not as useful as in C++. It is discussed here:


https://forum.dlang.org/post/rt26mu$2c6q$1...@digitalmars.com

As you see, someone will have to write a DIP to fix this bug, as 
the language authors don't consider it a bug, but an enhancement.


I've never got around to do it myself, but if you or someone else 
write the DIP, then I would like to help out with the wording if 
needed.
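
For readers who haven't hit it, the failing pattern looks roughly 
like this (simplified; the `Matrix`/`Vector` names are just for 
illustration):

```
struct Matrix(S, size_t M, size_t N) { S[M*N] data; }
alias Vector(S, size_t N) = Matrix!(S, N, 1);

// deduction works when the parameter is spelled in terms of Matrix
auto dot1(S, size_t N)(Matrix!(S, N, 1) a, Matrix!(S, N, 1) b) { return N; }

// deduction fails when the parameter goes through the alias template,
// even though Vector!(float, 3) *is* a Matrix!(float, 3, 1)
auto dot2(S, size_t N)(Vector!(S, N) a, Vector!(S, N) b) { return N; }

void main()
{
    Vector!(float, 3) a, b;
    auto x = dot1(a, b);     // OK
    // auto y = dot2(a, b);  // Error: cannot deduce function from argument types
}
```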




Re: How to verify DMD download with GPG?

2022-02-15 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Monday, 14 February 2022 at 15:51:59 UTC, Kagamin wrote:

3AAF1A18E61F6FAA3B7193E4DB8C5218B9329CF8 is 0xDB8C5218B9329CF8
This shortening was supposed to improve user experience.


Yes, I eventually noticed that the shortened fingerprints were 
used, but only after posting the OP… It is natural to scan for 
the start of a string when looking over a larger set to find a 
match, unless you use GPG more frequently than once every 5 years 
and remember to look for the tail of the fingerprint…


GPG is a good concept, but the usability lacks that extra polish 
that could make it attractive to a broader audience. What makes 
HTTPS so widespread is that the usability impact is low once you have 
a mechanism for distributing the key data for verifying 
connections. GPG could've done something similar.







Re: How to verify DMD download with GPG?

2022-02-15 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Monday, 14 February 2022 at 18:12:25 UTC, Era Scarecrow wrote:
 For Linux sources there are MD5 and SHA-1 hashes I believe. If 
you have two or three hashes for comparison, the likelihood of 
someone changing something without those two changing seems 
VERY low.


I usually grab the sources from github, but for binaries I'd like 
higher resolution SHAs presented on a secured server, different 
from the one hosting the files. The main concern is that hackers 
might obtain the access to both the binary and the website that 
presents the SHA…


PGP is good in theory, but if the keys are presented in a context 
that isn't secured then what good is it? There ought to be 
some central authority for PGP/GPG, it isn't all that difficult 
to implement either. The central authority could verify the 
email. Without that SHA is easier to deal with…





Re: How to verify DMD download with GPG?

2022-02-09 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Tuesday, 8 February 2022 at 20:15:50 UTC, forkit wrote:

btw. the key is listed there - not sure what you mean.


I didn't see "3AAF1A18E61F6FAA3B7193E4DB8C5218B9329CF8" on the 
listing on the webpage https://dlang.org/gpg_keys.html


That page is not similar to what I get with "--list-keys 
--with-subkey-fingerprints".


```
pub   rsa4096 2014-09-01 [SC] [expired: 2020-03-25]
  AFC7DB45693D62BB472BF27BAB8FE924C2F7E724
uid   [ expired] Martin Nowak (dawg) 
uid   [ expired] Martin Nowak 


uid   [ expired] Martin Nowak 
uid   [ expired] Martin Nowak 

pub   rsa2048 2016-01-29 [SC]
  BBED1B088CED7F958917FBE85004F0FAD051576D
uid   [ unknown] Vladimir Panteleev 


sub   rsa2048 2016-01-29 [E]
  AB78BAC596B648B59A41983A3850E93043EBB12C

pub   rsa4096 2015-11-24 [SC] [expires: 2026-03-23]
  8FDB8D357AF468A9428ACE3C2055F76601A36FB0
uid   [ unknown] Sebastian Wilzbach 
uid   [ unknown] Sebastian Wilzbach 

pub   rsa4096 2018-03-26 [SC] [expired: 2020-03-25]
  F77158814C19E5E07BA1079A65394AFEF4A68565
uid   [ expired] DLang Nightly (bot) 



pub   rsa4096 2020-03-12 [SC] [expires: 2022-03-12]
  F46A10D0AB44C3D15DD65797BCDD73FFC3EB6146
uid   [ unknown] Martin Nowak 
uid   [ unknown] Martin Nowak 


uid   [ unknown] Martin Nowak 
sub   rsa4096 2020-03-12 [E] [expires: 2022-03-12]
  B92614C21EC5779DC678468E9A813AD3C11508BC
sub   rsa4096 2020-03-12 [S] [expires: 2022-03-12]
  3AAF1A18E61F6FAA3B7193E4DB8C5218B9329CF8
```

The last two lines on the webpage are:

```
sub   rsa4096/0x9A813AD3C11508BC 2020-03-12 [E] [expires: 
2022-03-12]
sub   rsa4096/0xDB8C5218B9329CF8 2020-03-12 [S] [expires: 
2022-03-12]

```

That is why I thought something was wrong.



Re: How to verify DMD download with GPG?

2022-02-08 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Tuesday, 8 February 2022 at 20:15:50 UTC, forkit wrote:

On what basis would you trust the key? Think about it ;-)


Oh well, seems like the keyring has nothing to do with trust. 
This model with no certification authority is annoying. Now I 
remember why I never bother with PGP.




How to verify DMD download with GPG?

2022-02-08 Thread Ola Fosheim Grøstad via Digitalmars-d-learn
I don't use GPG often, so I probably did something wrong, and 
failed to get a trusted verification. I do like the idea that a 
hacker cannot change the signature file if gaining access to the 
web/file hosts, but how to verify it in a secure way?


I did this:

```
/opt/local/bin/gpg --keyring ./d-keyring.gpg --verify 
dmd.2.098.1.osx.tar.xz.sig dmd.2.098.1.osx.tar.xz

gpg: Signature made søn 19 des 22:35:47 2021 CET
gpg:using RSA key 
3AAF1A18E61F6FAA3B7193E4DB8C5218B9329CF8

gpg: Good signature from "Martin Nowak " [unknown]
gpg: aka "Martin Nowak 
" [unknown]
gpg: aka "Martin Nowak " 
[unknown]

gpg: WARNING: This key is not certified with a trusted signature!
gpg:  There is no indication that the signature belongs 
to the owner.
Primary key fingerprint: F46A 10D0 AB44 C3D1 5DD6  5797 BCDD 73FF 
C3EB 6146
 Subkey fingerprint: 3AAF 1A18 E61F 6FAA 3B71  93E4 DB8C 5218 
B932 9CF8

```

I also did not find the key listed here:

https://dlang.org/download.html



Re: How to pass a class by (const) reference to C++

2021-12-13 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Monday, 13 December 2021 at 12:51:17 UTC, evilrat wrote:
That example is still looks very conspicuous because it is very 
likely does nothing on the caller side in C++ as it passes a 
copy.


Yes, I wouldn't want to use it, maybe manual mangling is better, 
but still painful. ```const A&``` is so common in C++ APIs that 
it really should be supported out-of-the-box. All it takes is 
adding a deref-type-constructor to the D language spec, e.g. 
```ref const(@deref(A))```




Re: How to pass a class by (const) reference to C++

2021-12-13 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Monday, 13 December 2021 at 12:08:30 UTC, evilrat wrote:
Yeah but it sucks to have making C++ wrapper just for this. I 
think either pragma mangle to hammer it in place or helper 
dummy struct with class layout that mimics this shim logic is a 
better solution in such cases.

Literally anything but building C++ code twice for a project.


Does something like this work?


```
class _A {}
struct A {}

extern(C++) void CppFunc(ref const(A) arg);

void func(_A a){
CppFunc(*cast(A*)a);
}
```


Re: How to pass a class by (const) reference to C++

2021-12-13 Thread Ola Fosheim Grøstad via Digitalmars-d-learn
On Monday, 13 December 2021 at 12:16:03 UTC, Ola Fosheim Grøstad 
wrote:

class _A {}
struct A {}


With ```extern(C++)``` on these…



Re: How to loop through characters of a string in D language?

2021-12-13 Thread Ola Fosheim Grøstad via Digitalmars-d-learn
On Friday, 10 December 2021 at 18:47:53 UTC, Stanislav Blinov 
wrote:
Threshold could be relative for short strings and absolute for 
long ones. Makes little sense reallocating if you only waste a 
couple bytes, but makes perfect sense if you've just removed 
pages and pages of semicolons ;)


Like this?

```
@safe:

string prematureoptimizations(string s, char stripchar) @trusted {
    import core.memory;
    immutable uint flags = GC.BlkAttr.NO_SCAN|GC.BlkAttr.APPENDABLE;

    char* begin = cast(char*)GC.malloc(s.length+1, flags);
    char* end = begin + 1;
    foreach(c; s) {
        immutable size_t notsemicolon = c != stripchar;
        // hack: avoid conditional by writing semicolon outside buffer
        *(end - notsemicolon) = c;
        end += notsemicolon;
    }
    immutable size_t len = end - begin - 1;
    begin = cast(char*)GC.realloc(begin, len, flags);
    return cast(string)begin[0..len];
}

void main() {
import std.stdio;
string str = "abc;def;ab";
writeln(prematureoptimizations(str, ';'));
}

```


Re: How to loop through characters of a string in D language?

2021-12-12 Thread Ola Fosheim Grøstad via Digitalmars-d-learn
Of course, since it is easy to mess up and use ranges in the 
wrong way, you might want to add ```assert```s. That is most 
likely *helpful* to newbies that might want to use your kickass 
library function:


```
auto helpfuldeatheater(char stripchar)(string str) {
    struct voldemort {
        immutable(char)* begin, end;
        bool empty(){ return begin == end; }
        char front(){ assert(!empty); return *begin; }
        char back()@trusted{ assert(!empty); return *(end-1); }
        void popFront()@trusted{
            assert(!empty);
            while(begin != end){
                begin++;
                if (begin == end || *begin != stripchar) break;
            }
        }
        void popBack()@trusted{
            assert(!empty);
            while(begin != end){
                end--;
                if (end == begin || *(end-1) != stripchar) break;
            }
        }
        this(string s)@trusted{
            begin = s.ptr;
            end = s.ptr + s.length;
            while(begin!=end && *begin==stripchar) begin++;
            while(begin!=end && *(end-1)==stripchar) end--;
        }
    }
    return voldemort(str);
}
```



Re: How to loop through characters of a string in D language?

2021-12-12 Thread Ola Fosheim Grøstad via Digitalmars-d-learn
On Sunday, 12 December 2021 at 08:58:29 UTC, Ola Fosheim Grøstad 
wrote:

this(string s)@trusted{
begin = s.ptr;
end = s.ptr + s.length;
}
}


Bug, it fails if the string ends or starts with ';'.

Fix:

```
this(string s)@trusted{
begin = s.ptr;
end = s.ptr + s.length;
while(begin!=end && *begin==stripchar) begin++;
while(begin!=end && *(end-1)==stripchar) end--;
}
```



Re: How to loop through characters of a string in D language?

2021-12-12 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Saturday, 11 December 2021 at 19:50:55 UTC, russhy wrote:
you need to import a 8k lines of code module that itself 
imports other modules, and then the code is hard to read


I agree.

```
@safe:

auto deatheater(char stripchar)(string str) {
    struct voldemort {
        immutable(char)* begin, end;
        bool empty(){ return begin == end; }
        char front(){ return *begin; }
        char back()@trusted{ return *(end-1); }
        void popFront()@trusted{
            while(begin != end){
                begin++;
                if (begin == end || *begin != stripchar) break;
            }
        }
        void popBack()@trusted{
            while(begin != end){
                end--;
                if (end == begin || *(end-1) != stripchar) break;
            }
        }
        this(string s)@trusted{
            begin = s.ptr;
            end = s.ptr + s.length;
        }
    }
    return voldemort(str);
}


void main() {
import std.stdio;
string str = "abc;def;ab";
foreach(c; deatheater!';'(str)) write(c);
writeln();
foreach_reverse(c; deatheater!';'(str)) write(c);
}

```




Re: How to loop through characters of a string in D language?

2021-12-11 Thread Ola Fosheim Grøstad via Digitalmars-d-learn
On Saturday, 11 December 2021 at 09:40:47 UTC, Stanislav Blinov 
wrote:
On Saturday, 11 December 2021 at 09:34:17 UTC, Ola Fosheim 
Grøstad wrote:



void donttrythisathome(string s, char stripchar) @trusted {
import core.stdc.stdlib;
char* begin = cast(char*)alloca(s.length);


A function with that name, and calling alloca to boot, cannot 
be @trusted ;)


:-)

But I am a very trustworthy person! PROMISE!!!



Re: How to loop through characters of a string in D language?

2021-12-11 Thread Ola Fosheim Grøstad via Digitalmars-d-learn
On Saturday, 11 December 2021 at 09:34:17 UTC, Ola Fosheim 
Grøstad wrote:

@system


Shouldn't be there. Residual leftovers… (I don't want to confuse 
newbies!)




Re: How to loop through characters of a string in D language?

2021-12-11 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Saturday, 11 December 2021 at 08:46:32 UTC, forkit wrote:
On Saturday, 11 December 2021 at 08:05:01 UTC, Ola Fosheim 
Grøstad wrote:


Using libraries can trigger hidden allocations.


ok. fine. no unnecessary, hidden allocations then.

// --

module test;

import core.stdc.stdio : putchar;

nothrow @nogc void main()
{
string str = "abc;def;ab";

ulong len = str.length;

for (ulong i = 0; i < len; i++)
{
if (cast(int) str[i] != ';')
putchar(cast(int) str[i]);
}
}

// --


```putchar(…)``` is too slow!


```

@safe:

extern (C) long write(long, const void *, long);


void donttrythisathome(string s, char stripchar) @trusted {
import core.stdc.stdlib;
char* begin = cast(char*)alloca(s.length);
char* end = begin;
foreach(c; s) if (c != stripchar) *(end++) = c;
write(0, begin, end - begin);
}


@system
void main() {
string str = "abc;def;ab";
donttrythisathome(str, ';');
}
```




Re: How to loop through characters of a string in D language?

2021-12-11 Thread Ola Fosheim Grøstad via Digitalmars-d-learn
On Saturday, 11 December 2021 at 09:26:06 UTC, Stanislav Blinov 
wrote:
What you're showing is... indeed, don't do this, but I fail to 
see what that has to do with my suggestion, or the original 
code.


You worry too much, just have fun with differing ways of 
expressing the same thing.


(Recursion can be completely fine if the compiler supports it 
well. Tail recursion that is, not my example.)


Again, that is a different algorithm than what I was responding 
to.


Slightly different, but same idea. Isn't the point of this thread 
to present N different ways of doing the same thing? :-)






Re: How to loop through characters of a string in D language?

2021-12-11 Thread Ola Fosheim Grøstad via Digitalmars-d-learn
On Friday, 10 December 2021 at 18:47:53 UTC, Stanislav Blinov 
wrote:
Threshold could be relative for short strings and absolute for 
long ones. Makes little sense reallocating if you only waste a 
couple bytes, but makes perfect sense if you've just removed 
pages and pages of semicolons ;)


Scanning short strings twice is not all that expensive as they 
will stay in the CPU cache when you run over them a second time.


```
import std.stdio;

@safe:

string stripsemicolons(string s) @trusted {
int i,n;
foreach(c; s) n += c != ';'; // premature optimization
auto r = new char[](n);
foreach(c; s) if (c != ';') r[i++] = c;
return cast(string)r;
}

int main() {
writeln(stripsemicolons("abc;def;ab"));
return 0;
}
```



Re: How to loop through characters of a string in D language?

2021-12-11 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Saturday, 11 December 2021 at 00:39:15 UTC, forkit wrote:

On Friday, 10 December 2021 at 22:35:58 UTC, Arjan wrote:


"abc;def;ghi".tr(";", "", "d" );



I don't think we have enough ways of doing the same thing yet...

so here's one more..

"abc;def;ghi".substitute(";", "");


Using libraries can trigger hidden allocations.

```
import std.stdio;

string garbagefountain(string s){
    if (s.length == 1) return s == ";" ? "" : s;
    return garbagefountain(s[0..$/2]) ~ garbagefountain(s[$/2..$]);
}

int main() {
writeln(garbagefountain("abc;def;ab"));
return 0;
}

```



Re: How to loop through characters of a string in D language?

2021-12-10 Thread Ola Fosheim Grøstad via Digitalmars-d-learn
On Friday, 10 December 2021 at 18:47:53 UTC, Stanislav Blinov 
wrote:
Oooh, finally someone suggested to preallocate storage for all 
these reinventions of the wheel :D


```
import std.stdio;

char[] dontdothis(string s, int i=0, int skip=0){
if (s.length == i) return new char[](i - skip);
if (s[i] == ';') return dontdothis(s, i+1, skip+1);
auto r = dontdothis(s, i+1, skip);
r[i-skip] = s[i];
return r;
}

int main() {
string s = "abc;def;ab";
string s_new = cast(string)dontdothis(s);
writeln(s_new);
return 0;
}
```



Re: How to check for overflow when adding/multiplying numbers?

2021-12-06 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Monday, 6 December 2021 at 17:46:35 UTC, Dave P. wrote:
I’m porting some C code which uses the gcc intrinsics to do a 
multiply/add with overflow checking. See 
[here](https://gcc.gnu.org/onlinedocs/gcc/Integer-Overflow-Builtins.html) for reference. Is there a D equivalent?


There is:

https://dlang.org/phobos/core_checkedint.html

I have never used it, so I don't know how well it performs.
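
Going by the docs, usage looks roughly like this (a sketch, I 
haven't benchmarked it):

```
import core.checkedint : adds, muls;
import std.stdio : writeln;

void main()
{
    bool overflow = false;

    // adds/muls wrap like normal integer arithmetic, but set the
    // sticky `overflow` flag when the result didn't fit
    int sum = adds(int.max, 1, overflow);
    writeln(sum, " overflow=", overflow);   // overflow is now true

    overflow = false;
    long prod = muls(long.max / 2, 4, overflow);
    writeln(prod, " overflow=", overflow);  // true again
}
```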




Re: Attributes (lexical)

2021-11-25 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Thursday, 25 November 2021 at 12:16:50 UTC, rumbu wrote:
I try to base my reasoning on specification, dmd is not always 
a good source of information, the lexer is polluted by old 
features or right now by the ImportC feature, trying to lex D 
and C at the same time.


Alright. I haven't looked at it since work on the ```importC``` 
feature started.


(The lexer code takes a bit of browsing to get used to, but it 
isn't all that challenging once you are into it.)




Re: Attributes (lexical)

2021-11-25 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Thursday, 25 November 2021 at 10:41:05 UTC, Rumbu wrote:
I am not asking this questions out of thin air, I am trying to 
write a conforming lexer and this is one of the ambiguities.


I think it is easier to just look at the lexer in the dmd source. 
The D language does not really have a proper spec, it is more 
like an effort to document the implementation.




Re: Attributes (lexical)

2021-11-25 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Thursday, 25 November 2021 at 08:06:27 UTC, rumbu wrote:

Is that ok or it's a lexer bug?


Yes. The lexer just eats whitespace and the parser accepts way 
too much.




Re: Exceptions names list

2021-11-16 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Tuesday, 16 November 2021 at 10:28:18 UTC, Imperatorn wrote:
I just did a quick grep of phobos and matched the derived 
exceptions and got this:

Base64Exception
MessageMismatch
OwnerTerminated
LinkTerminated
PriorityMessageException
MailboxFull
TidMissingException
ConvException
CSVException
EncodingException


Neat list. I guess a few of them are template-parameter names?

But yeah, the list shows that there is an ontology-job that ought 
to be done. Naming-schemes and making sure that the most specific 
exceptions subclass more generic ones, I guess.




Re: Exceptions names list

2021-11-16 Thread Ola Fosheim Grøstad via Digitalmars-d-learn
On Tuesday, 16 November 2021 at 09:41:15 UTC, Ola Fosheim Grøstad 
wrote:
want to follow the same error handling pattern so that user 
code can replace a standard library function with an enhanced 
third party function without changing the error handling code 
down the call-tree.


Eh, *up* the call-tree. Why are call-trees upside-down?





Re: Exceptions names list

2021-11-16 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Tuesday, 16 November 2021 at 09:29:34 UTC, Ali Çehreli wrote:
Further, it doesn't feel right to throw e.g. 
std.conv.ConvException from my foo.bar module. The cases where 
another module's exception fitting my use closely feels so rare 
that I wouldn't even think about reaching for an existing 
exception of another module or a library.


I think it is a mistake for ```std``` to not collect all 
exceptions in one location. If you want to write a library that 
is a natural extension of the standard library then you also want 
to follow the same error handling pattern so that user code can 
replace a standard library function with an enhanced third party 
function without changing the error handling code down the 
call-tree.


Re: Cannot compile C file using ImportC

2021-11-12 Thread Ola Fosheim Grøstad via Digitalmars-d-learn
On Friday, 12 November 2021 at 08:12:22 UTC, Ola Fosheim Grøstad 
wrote:
Maybe there are some metaprogramming advantages, but I cannot 
think of any.


Well, one advantage might be that it could be easier to do escape 
analysis of C code. Not really sure if there is a difference 
compared to doing it over the IR, though.





Re: Cannot compile C file using ImportC

2021-11-12 Thread Ola Fosheim Grøstad via Digitalmars-d-learn
On Thursday, 11 November 2021 at 02:03:22 UTC, Steven 
Schveighoffer wrote:
I don't think ImportC is that much of a game changer (you can 
already make C bindings with quite a small effort, and there 
are tools to get you 90% there), but a broken/not working 
ImportC will be a huge drawback to the language, so it's 
important to get it right.


The advantage I thought of when I suggested something similar many 
years ago was that it would enable inlining of C functions, but 
since then LLVM has gotten whole program optimization over the 
IR, so now the advantage is more limited.


Maybe there are some metaprogramming advantages, but I cannot 
think of any.




Re: abs and minimum values

2021-10-31 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Sunday, 31 October 2021 at 05:04:33 UTC, Dom DiSc wrote:
This should be no surprise. You need to know what the resulting 
type of int + uint should be. And it is .. uint!  which is 
one of the stupid integer-promotion rules inherited from C.


In C++ it is undefined behaviour to take the absolute value of a 
value that has no positive representation. I assume the same is 
true for C? So you can write a compiler that detects it and fails.


You cannot do this in D as int is defined to represent an 
infinite set of numbers (mapped as a circle). So in D, you could 
say that the abs of the most negative value is a positive value 
that is represented as a negative due to circular wrapping.


If this happens in C then it is a bug. If it happens in D, then 
it is a defined feature of the language.


Re: abs and minimum values

2021-10-29 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Thursday, 28 October 2021 at 21:23:15 UTC, kyle wrote:
I stumbled into this fun today. I understand why abs yields a 
negative value here with overflow and no promotion. I just want 
to know if it should. Should abs ever return a negative number? 
Thanks.


D has defined signed integers to be modular, so they represent 
numbers mapped to a circle rather than a line.


Is that a good idea? No, but that is what you have.
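
A quick illustration of the wrap-around (using `std.math.abs`):

```
import std.math : abs;
import std.stdio : writeln;

void main()
{
    // int.min has no positive counterpart in 32-bit two's complement,
    // so negation wraps around the "circle" and abs stays negative
    writeln(abs(int.min));         // -2147483648
    writeln(-int.min == int.min);  // true: modular wrap-around
}
```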





Re: Are D classes proper reference types?

2021-06-28 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Monday, 28 June 2021 at 07:44:25 UTC, Mathias LANG wrote:
(the case you are thinking of), but also prevents us from 
making non backward-compatible downstream changes (meaning we 
can't change it as we see fit if we realize there is potential 
for optimization).


Has this ever happened?

Waiting on "consensus" is an easy way to avoid doing any kind 
of work :)


You don't need strict consensus, but you need at least one 
compiler team to agree that it is worthwhile.


I'm fairly sure most large achievements that have been 
undertaken by people in this community (that were not W) have 
been done without their (W's) blessing. People just went 
ahead and did it. But obviously those people cared more about 
getting things done than spending time discussing it on the 
forums.


Was that a snide comment? Totally uncalled for, I certainly don't 
depend on anyone's blessing to play with my own fork, but it does 
not affect anything outside it.


Making a PR for a repo without acceptance is utterly pointless 
and a waste of effort. Nobody should do it. They will just end up 
feeling miserable about what they could instead have spent their 
time on (including kids and family).


I am 100% confident that there has been a massive waste of effort 
in the D history that is due to a lack of coordination. Ranging 
from libraries that went nowhere to PRs that dried up and died.


Individual PRs won't fix the whole. The whole can only be fixed 
with a plan. To get to a place where you can plan you need to 
form a vision. To form a vision you need to work towards 
consensus.


You cannot fix poor organization with PRs. The PR-demanding crowd 
is off the rails irrational. Cut down on the excuses, start 
planning!


(What large achievements are you speaking of, by the way?)



Re: Are D classes proper reference types?

2021-06-28 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Monday, 28 June 2021 at 06:25:39 UTC, Mike Parker wrote:
Slack isn't like our IRC or Discord channels. It's more async 
like the forums.


Thanks for the info, I might look into Slack if it doesn't 
require me to install anything. If it is an official channel you 
might consider adding it to the community menu or at least:


https://wiki.dlang.org/Get_involved




Re: Are D classes proper reference types?

2021-06-28 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Sunday, 27 June 2021 at 23:20:38 UTC, Mathias LANG wrote:
- It locks us in a position where we depend on an external 
committee / implementation to define our ABI.


Well, that could be an issue, but it is not likely to change fast 
or frequently so I don't think it is a high risk approach.


In any case, if you feel like it's worth it @Ola, you could 
start to look into what it takes (druntime/dmd wise) and start 
to submit PR.


There is no point unless there is consensus. You first need to 
get consensus, otherwise it means throwing time into the 
garbage bin.



For dlang devs questions, I recommend to use the dlang slack.


I don't have much time for chat, I prefer async communication.



Re: Are D classes proper reference types?

2021-06-27 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Sunday, 27 June 2021 at 10:11:44 UTC, kinke wrote:
It's not about classes using monitors themselves, it's about 
people potentially using `synchronized (obj)` for some 
arbitrary class reference `obj`.


I wasn't aware this was a thing. If people want this they can 
just embed a mutex in the class themselves. No point in having it 
everywhere.  You usually don't want to coordinate over one 
instance anyway.
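
A tiny sketch of the "embed it yourself" approach (names are mine):

```
import core.sync.mutex : Mutex;

class Account
{
    private Mutex mtx;
    private long balance;

    this() { mtx = new Mutex; }

    void deposit(long amount)
    {
        mtx.lock();
        scope (exit) mtx.unlock();
        balance += amount;
    }
}

void main()
{
    auto acc = new Account;
    acc.deposit(100);
}
```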


Sure, 'just' :D - as it 'just' takes someone to implement it 
(for all supported C++ runtimes). It's always the same problem,


Right, but what does all supported C++ runtimes mean? I thought 
LDC was tied to clang, which I guess means two runtimes? If C++ 
doesn't use arbitrary negative offsets, then D could use those?




Re: Are D classes proper reference types?

2021-06-27 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Sunday, 27 June 2021 at 09:35:10 UTC, IGotD- wrote:

Probably about all managed languages.


I am sceptical of this assumption. There are no reasons for a GC 
language to require the usage of fat pointers?


When you use a struct as a member variable in another struct 
the data will be expanded into the host struct. If the member 
struct is 16 bytes then the host struct will have grow 16 bytes 
to accommodate that member struct.


This is not the case in D with classes as classes always are 
allocated on the heap using dynamic allocation. This leads to 
more fragmentation and memory consumption.


Ok, I understand what you mean, but classes tend to be used for 
"big objects". I don't think there is anything that prevents a 
private class reference to be replaced by an inline 
representation as an optimization if no references ever leak. If 
you use whole program optimizations such things could also be 
done for public members.
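
To make the layout difference concrete (a tiny illustration; the 
byte counts assume a 64-bit target):

```
struct Point { double x, y; }   // 16 bytes, embedded in its host
class  Node  { double x, y; }   // instances live on the heap

struct HostS { Point p; }       // the Point is expanded inline
struct HostC { Node n; }        // only a reference is stored

static assert(HostS.sizeof == 16);
static assert(HostC.sizeof == (void*).sizeof);
```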


What is holding D back here is the lack of a high-level IR after 
the frontend where such global passes could improve the 
implementation quality.


So, this is at the core of good language design; keep the model 
simple, but enable and allow optimizations. Too much special 
casing and you end up with a language that is difficult to extend 
and a neverending discovery of corner cases and compiler bugs.




Re: Are D classes proper reference types?

2021-06-27 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Sunday, 27 June 2021 at 08:41:27 UTC, kinke wrote:

Getting rid of the monitor field was discussed multiple times.


You don't have to get rid of it, just implicitly declare it for 
classes that use monitors? I don't think it has to be at a 
specific offset?


The other major and not so trivial difference is that 
extern(C++) classes have no TypeInfo pointer (in the first 
vtable slot for extern(D) classes), which also means that 
dynamic casts don't work, neither in D nor in C++ (for the 
latter, only for instances instiantiated on the D side). 
[AFAIK, most C++ implementations put the - of course totally 
incompatible - *C++* TypeInfo into vtable slot -1.]


But D could just extend C++ typeinfo?




Re: Are D classes proper reference types?

2021-06-27 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Saturday, 26 June 2021 at 20:03:01 UTC, kinke wrote:
On Saturday, 26 June 2021 at 13:49:25 UTC, Ola Fosheim Grøstad 
wrote:
Is it possible to inherit from a C++ class and get a D 
subclass, and is it possible to inherit from a D class and get 
a C++ class?


Sure thing, with `extern(C++) class` of course.


That is all good, but it will lead to `extern(C++) class` 
replacing D classes. So why not unify right away? Why wait for 
the inevitable?


With C++, you can today, an `extern(C++) class C` is equivalent 
to and mangled as C++ `C*`. You can't pass it directly to some 
`unique_ptr` or `shared_ptr` of course; an according D 
wrapper reflecting the C++ implementation (library-dependent) 
would be needed anyway for correct mangling. It'd be 
implemented as a templated D struct


Yes, this is all good. But if you unify the layout of C++ and D 
classes and use the same layout as C++ shared_ptr for reference 
counted D classes then you can easily move back and forth between 
the languages. I think the presumption that development only 
happens in D and you only use other people's C++ code is ill 
advised. One may use a framework in C++ that one also extends in 
C++, but maybe want to use another language for the high level 
stuff.





Re: Are D classes proper reference types?

2021-06-27 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Friday, 25 June 2021 at 21:05:36 UTC, IGotD- wrote:
Yes, that's a tradeoff but one I'm willing to take. I'm 
thinking even bigger managed pointers of perhaps 32 bytes which 
has more metadata like the allocated size. Managed languages in 
general have fat pointers which we see everywhere and it is not 
a big deal.


Which languages use fat pointers? C++ may use it (but is not 
required to).


If you are littering pointers you perhaps should refactor your 
code, use an array if loads of objects of the same type.


This is what I want to avoid as it makes refcounting more 
difficult. If D classes are reference types then they should 
always be referred to through a pointer. If you want to put it in 
an array, use a struct.


Another thing which I'm not that satisfied with D is that there 
is no built in method of expanding member classes into the host 
class like C++ which creates pointer littering and memory 
fragmentation.


Not sure what you mean by expanding? I never liked `alias this` 
for structs, inheritance would be simpler. Is this what you mean 
by expanding?


I think classes in C++ are usually used more like structs in D. 
C++ programmers tend to avoid using virtuals, so D-style classes 
(C++ classes with virtual members) tend to be a smaller part of 
C++ codebases (but it depends on the project, obviously).






Re: Are D classes proper reference types?

2021-06-26 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Saturday, 26 June 2021 at 07:39:44 UTC, kinke wrote:
I'm pretty sure I haven't seen a weak pointer in C++ yet. I 
don't look at C++ much anymore, but I suspect they are even a 
lot rarer than shared_ptr.


Weak pointers are usually not needed, but sometimes you do need 
them and then not having them becomes a big weakness. Usually 
one strives to have ownership defined in a way that makes it 
possible to dismantle a graph in a structured way from a single 
thread, and then weak pointers are not needed at all.


Wrt. mixed class hierarchies, being able to inherit from and 
instantiate C++ classes in D is of some priority and mostly 
works today. (Let's not get into discussing multiple 
inheritance here, it's hardly a show-stopper for most use 
cases.)


Is it possible to inherit from a C++ class and get a D subclass, 
and is it possible to inherit from a D class and get a C++ class?


Is Swift a thing outside the Apple universe (I admittedly 
despise ;))? It's surely much better than their Objective-C 
crap, but still...


The Apple universe is pretty big, we can dislike that, but that 
does not change the market…


So the goal would be to put the ref-count on a negative offset so 
that the layout can match up with C++ or Swift. I don't think 
that implies any big disadvantages, but I could be wrong. The 
only way to find out is to hash out the alternatives.


So for rare use cases like shared_ptr/weak pointer interop, a 
library solution (just like they are in C++) is IMO enough.


But the best solution is to get to a place where you can hand 
D-objects to other languages with ease without doing a runtime 
conversion from one layout to another.




Re: Are D classes proper reference types?

2021-06-26 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Friday, 25 June 2021 at 23:55:40 UTC, kinke wrote:
I cannot imagine how weak pointers would work without an ugly 
extra indirection layer. If we're on the same page, we're 
talking about embedding the reference counter *directly* in the 
class instance, and the class ref still pointing directly to 
the instance.


So, my understanding is that C++ `make_shared` may allocate the 
reference chunk and the object in the same memory area, so that 
the reference count/weak counter is at a negative offset. That 
way you can free up the object while retaining the counter. (At 
some fragmentation cost, if the counter isn't freed.) This may 
give some cache advantages over separate allocation.
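
To make the negative-offset idea concrete, a toy sketch (my own 
names; a real control block would also carry a deleter etc.):

```
import core.stdc.stdlib : free, malloc;

struct RcBlock(T)
{
    size_t strong;  // strong reference count
    size_t weak;    // weak reference count
    T payload;      // object stored right after the counters
}

// allocate counters and payload in one block, hand out only &payload
T* rcAlloc(T, Args...)(Args args)
{
    auto blk = cast(RcBlock!T*) malloc(RcBlock!T.sizeof);
    blk.strong = 1;
    blk.weak = 0;
    blk.payload = T(args);
    return &blk.payload;
}

// the counters sit at a negative offset from the payload pointer
RcBlock!T* header(T)(T* p)
{
    return cast(RcBlock!T*) (cast(ubyte*) p - RcBlock!T.payload.offsetof);
}

void rcRelease(T)(T* p)
{
    auto h = header(p);
    if (--h.strong == 0 && h.weak == 0)
        free(h);
}

struct Point { double x, y; }

void main()
{
    auto p = rcAlloc!Point(1.0, 2.0);
    assert(header(p).strong == 1);
    rcRelease(p);   // count hits zero, the whole block is freed
}
```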


Maybe @weak could be an annotation, but I haven't given this much 
thought. If we think about it: you don't have to pass around weak 
pointers, so there is no reason for parameters to be marked weak? 
Only pointer-fields in structs or classes? In function bodies you 
would typically not want to use a weak pointer as you want to 
extend the lifetime of the object until the function returns.


Also you could mark a class as non-weak, to save the weak counter.

Weak pointers aren't in the language, so I don't see why they 
would matter here. I thought you were after replacing 
GC-allocated class instances by a simple RC scheme.


One goal could be to make a class compatible with C++ or Swift on 
request. Both support weak pointers. You could have multiple 
ref-count layout schemes as long as they all are on negative 
offsets. Just don't mix class hierarchies. So you could mix a D 
class hierarchy, a C++ class hierarchy and a Swift class hierarchy in 
the same codebase?


In modern C++ code I've been looking at so far, shared_ptr was 
used very rarely (and unique_ptr everywhere). AFAIK, the main 
reason being poor performance due to the extra indirection of 
shared_ptr. So replacing every D class ref by a 
shared_ptr-analogon for interop reasons would seem very 
backwards to me.


In C++ ownership should in general be kept local and one should 
use borrowing pointers (raw pointers) for actual computations, so 
only use owned pointers for transfer of ownership. But that puts 
more of a burden on the developer.




Re: Are D classes proper reference types?

2021-06-25 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Friday, 25 June 2021 at 17:37:13 UTC, IGotD- wrote:
You cannot use the most significant bit as it will not work 
with some 32-bit systems. Linux with a 3G kernel position for 
example. Better to use the least significant bit as all 
allocated memory is guaranteed to be aligned. Regardless this 
requires compiler support for masking off this bit.


Hm. Not sure if I follow, I think we are talking about stuffing 
bits into the counter and not the address?


Now we're going into halfway fat pointer support. Then we can 
just use fat pointers instead and have full freedom.


But fat pointers are 16 bytes, so quite expensive.



Re: Are D classes proper reference types?

2021-06-25 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Friday, 25 June 2021 at 07:17:20 UTC, kinke wrote:
Wrt. manual non-heap allocations (stack/data segment/emplace 
etc.), you could e.g. reserve the most significant bit of the 
counter to denote such instances and prevent them from being 
free'd (and possibly finalization/destruction too; this would 
need some more thought I suppose).


Destruction is a bit tricky. If people rely on the destructor to 
run when the function returns then that cannot be moved to a 
reference counter. For instance if they have implemented some 
kind of locking mechanism or transaction mechanism with classes…


The most tricky one is emplace though as you have no way of 
releasing the memory without an extra function pointer.


Regarding using high bits in the counter: what you would want is 
to have a cheap increment/decrement and instead take the hit when 
the object is released. So you might actually instead want to 
keep track of the allocation status in the lower 3 bits and 
instead do ±8, but I am not sure how that affects different CPUs. 
The basic idea would be to make it so you don't trigger 
destruction on 0, but when the result is/becomes negative.
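
Roughly like this (a toy sketch of the idea; the flag values are 
made up):

```
enum ptrdiff_t RC_STEP     = 8;      // one reference = one step of 8
enum ptrdiff_t RC_EMPLACED = 0b001;  // hypothetical flag: memory not owned

// a freshly created object stores only its flag bits,
// i.e. a stored value of 0..7 means "one live reference"
void retain(ref ptrdiff_t rc) { rc += RC_STEP; }

// returns true when the last reference is gone
bool release(ref ptrdiff_t rc)
{
    rc -= RC_STEP;
    return rc < 0;  // trigger on "negative", not on "zero"
}

void main()
{
    ptrdiff_t rc = RC_EMPLACED;  // one reference, emplaced storage
    retain(rc);                  // now two references
    assert(!release(rc));        // back to one
    assert(release(rc));         // last one gone -> destroy
    assert((rc & 0b111) == RC_EMPLACED);  // flags still readable when freeing
}
```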





Re: Are D classes proper reference types?

2021-06-25 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Friday, 25 June 2021 at 07:01:31 UTC, kinke wrote:
Well AFAIK it's mandated by the language, so an RC scheme 
replacing such allocations by heap ones seems like a definite 
step backwards - it's a useful pattern, and as Stefan pointed 
out, definitely in use. You could still stack-allocate but 
accommodate for the counter prefix in the compiler.


Yes, but then I need to mark it as non-freeable.

It's certainly possible as it's a library thing; some existing 
code may assume the returned reference to point to the 
beginning of the passed memory though (where there'd be your 
counter). What you'd definitely need to adjust is 
`__traits(classInstanceSize)`, accomodating for the extra 
counter prefix.


Yes, if people don't make assumptions about where the class ends 
and overwrite some other object, but I suspect pointer 
arithmetic isn't all that common for classes in D code.


There's very likely existing code out there which doesn't use 
druntime's emplace[Initializer], but does it manually.


I guess, but the compiler could have a release note warning 
against this.


ctor. It's probably easier to have the compiler put it into 
static but writable memory, so that you can mess with the 
counter.


Another reason to add the ability to mark it as non-freeable.

All in all, I think a more interesting/feasible approach would 
be abusing the monitor field of extern(D) classes for the 
reference counter. It's the 2nd field (of pointer size) of each 
class instance, directly after the vptr (pointer to vtable). I 
think all monitor access goes through a druntime call, so you 
could hook into there, disallowing any regular monitor access, 
and put this (AFAIK, seldomly used) monitor field to some good 
use.


Yes, if you don't want to support weak pointers. I think you need 
two counters if you want to enable the usage of weak pointers.


One reason to put it at a negative offset is that it makes it 
possible to make it fully compatible with shared_ptr. And then 
you can also have additional fields such as a weak counter or a 
deallocation function pointer.


I don't think maintaining the D ABI is important, so one could 
add additional fields to the class. Maintaining core language 
semantics shouldn't require ABI support I think.










Re: Are D classes proper reference types?

2021-06-25 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Thursday, 24 June 2021 at 07:28:56 UTC, kinke wrote:
Yes, class *refs* are always pointers. *scope* classes are 
deprecated (I don't think I've ever seen one); with `scope c = 
new Object`, you can have the compiler allocate a class 
*instance* on the stack for you, but `c` is still a *ref*.


But the user code cannot depend on it being stack allocated? So I 
could replace the Object reference with a reference counting 
pointer and put the counter at a negative offset?


`emplace` doesn't allocate, you have to pass the memory 
explicitly.


This is more of a problem. I was thinking about arrays that 
provide an emplace method, then one could replace emplace with 
heap allocation. I guess it isn't really possible to make 
`emplace` with custom memory work gracefully with reference 
counting with ref count at negative offset.
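
For reference, this is roughly what `emplace` with caller-provided 
memory looks like today (`Foo` is just a stand-in):

```
import core.lifetime : emplace;

class Foo
{
    int x;
    this(int n) { x = n; }
}

void main()
{
    // caller provides suitably sized and aligned raw storage
    enum size = __traits(classInstanceSize, Foo);
    ulong[(size + ulong.sizeof - 1) / ulong.sizeof] buf;

    Foo f = emplace!Foo(buf[], 42);  // constructs in-place, no allocation
    assert(f.x == 42);
}
```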


A class *instance* can also live in the static data segment 
(`static immutable myStaticObject = new Object;`);


But it isn't required to? It certainly wouldn't work with 
reference counting if it is stored in read only memory...


`extern(C++)` class instances can also live on the C++ 
heap/stack etc. etc.


Yes, that cannot be avoided.


