Re: wanting to try a GUI toolkit: needing some advice on which one to choose

2021-06-01 Thread Ola Fosheim Grostad via Digitalmars-d-learn

On Tuesday, 1 June 2021 at 05:27:41 UTC, Imperatorn wrote:

On Tuesday, 1 June 2021 at 03:32:50 UTC, someone wrote:

[...]


Yeah, "fragmentation" is a problem. We do a lot of things 90%. 
We need more "100% projects" that are just plug n play rather 
than plug n pray


The solution is to reduce the scope of projects, but that 
requires design and planning. Hobby projects tend to be 
experiments that evolve over time.





Re: Why is this allowed? Inheritance variable shadowing

2021-05-26 Thread Ola Fosheim Grostad via Digitalmars-d-learn

On Wednesday, 26 May 2021 at 18:58:47 UTC, JN wrote:
Is there any viable usecase for this behavior? I am not buying 
the "C++ does it and it's legal there" argument. There's a 
reason most serious C++ projects use static analysis tools 
anyway. D should be better and protect against dangerous code 
by default. I think a warning in this case would be warranted.


There are certainly many use cases for static members; maybe that 
is why the designers feel it should be allowed for instance 
members too?


I think this is a clear case of something that should produce a 
warning and provide a silencing annotation for the cases where you 
really want it.
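For readers who have not hit this: a minimal illustration of the shadowing under discussion (class and field names are made up):

```d
class Base
{
    int x = 1;
}

class Derived : Base
{
    int x = 2; // shadows Base.x -- accepted without any warning today
}

void main()
{
    auto d = new Derived;
    Base b = d;
    assert(d.x == 2); // the Derived view sees its own x
    assert(b.x == 1); // the Base view still sees Base.x
}
```

Fields are resolved by static type, so the same object yields different values depending on the reference type, which is exactly the kind of surprise a warning would catch.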





Re: Template and alloca?

2021-05-26 Thread Ola Fosheim Grostad via Digitalmars-d-learn

On Wednesday, 26 May 2021 at 08:38:29 UTC, Imperatorn wrote:
On Wednesday, 26 May 2021 at 07:34:06 UTC, Ola Fosheim Grøstad 
wrote:
On Tuesday, 25 May 2021 at 17:55:17 UTC, Ola Fosheim Grostad 
wrote:
Is it possible to use a template to write a "function" that 
provides initialized stack allocated memory (alloca)? Maybe I 
would have to use mixin?


Nevermind, I've realized that I only need a way to force a 
function to be inlined with 100% certainty. Then I can return 
a structure holding alloca-allocated memory.


Do you accomplish that with just pragma inline? (for future 
reference)


I suspect that LDC allows LLVM to use an external function, but I 
don't know for sure. I would have to look over the code it sends 
to LLVM... But the beauty of Open Source is that it is easy to 
modify LDC!


Anyway, don't do this with C compilers, as they might refuse to 
inline functions with alloca to prevent the stack from 
exploding...


Very much a low level approach, but fun!
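The hack under discussion might look roughly like this. A sketch only: `stackBuffer` is a made-up name, and the code is only memory-safe if the compiler really honors the inline request:

```d
import core.stdc.stdlib : alloca;

// Only memory-safe if the call is truly inlined into the caller's
// frame; otherwise the returned slice dangles immediately.
pragma(inline, true)
void[] stackBuffer(size_t n)
{
    void* p = alloca(n);
    return p[0 .. n];
}
```

A caller would do `auto buf = stackBuffer(128);` and must not let `buf` escape the frame; as noted above, C compilers may simply refuse to inline functions containing alloca, and D compilers may vary too.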



Re: How does inheritance and vtables work wrt. C++ and interop with D? Fns w/ Multiple-inheritance args impossible to bind to?

2021-05-25 Thread Ola Fosheim Grostad via Digitalmars-d-learn

On Tuesday, 25 May 2021 at 18:12:27 UTC, Gavin Ray wrote:
Would this fix it, or is it just not super viable to hack 
around C++ multiple inheritance in D?


You can do anything you want with structs, raw memory, and 
casting, so it is viable if you have a strong interest in this.


But if you are not a low level programmer you might find it 
tedious.


Template and alloca?

2021-05-25 Thread Ola Fosheim Grostad via Digitalmars-d-learn
Is it possible to use a template to write a "function" that 
provides initialized stack allocated memory (alloca)? Maybe I 
would have to use mixin?





Re: How does inheritance and vtables work wrt. C++ and interop with D? Fns w/ Multiple-inheritance args impossible to bind to?

2021-05-24 Thread Ola Fosheim Grostad via Digitalmars-d-learn

On Monday, 24 May 2021 at 18:52:22 UTC, Ola Fosheim Grostad wrote:
If an informal description is needed then the best option is to 
search the Clang mailing list.


Btw, the Clang docs say they strive to match MSVC, so apparently 
it is platform dependent. The only sensible option is to check 
with the Clang people. If using an informal reference such as a 
book, make sure to get the errata; such books tend to have 
errors...




Re: How does inheritance and vtables work wrt. C++ and interop with D? Fns w/ Multiple-inheritance args impossible to bind to?

2021-05-24 Thread Ola Fosheim Grostad via Digitalmars-d-learn

On Monday, 24 May 2021 at 18:46:00 UTC, Guillaume Piolat wrote:
Multiple inheritance is a rare topic here, I doubt too many 
people know how it works internally.


It is described in the link I gave, no? If I tried to give an 
informal description I would probably be inaccurate, and that 
would be worse than reading the spec yourself.


If an informal description is needed then the best option is to 
search the Clang mailing list.


Re: How does inheritance and vtables work wrt. C++ and interop with D? Fns w/ Multiple-inheritance args impossible to bind to?

2021-05-23 Thread Ola Fosheim Grostad via Digitalmars-d-learn

On Sunday, 23 May 2021 at 21:02:31 UTC, Gavin Ray wrote:
I don't really know anything at all about compilers or 
low-level code -- but is there any high-level notion of 
"inheritance" after it's been compiled?


Yes, in the structure of the vtable, which is why the spec is so 
hard to read.


If possible stick to single inheritance in C++...



Re: How does inheritance and vtables work wrt. C++ and interop with D? Fns w/ Multiple-inheritance args impossible to bind to?

2021-05-23 Thread Ola Fosheim Grostad via Digitalmars-d-learn

On Sunday, 23 May 2021 at 19:44:01 UTC, Gavin Ray wrote:
So one of the problems with generating D code for bindings to 
C++ is that there's no true/direct multiple inheritance.


If anyone happens to understand well how vtables work and the 
way the compiler treats these things, is there a way to hackily 
make semantically-equivalent objects?


I believe Clang and MSVC use different layouts.
If I am not mistaken, Clang/GCC use the Itanium ABI, but I 
could be wrong.


https://itanium-cxx-abi.github.io/cxx-abi/abi.html#vtable

Maybe ask in the LDC forum as they follow Clang? In my opinion MI 
ought to be supported for binding, but... not sure if anyone has 
considered it.


Re: ugly and/or useless features in the language.

2021-05-23 Thread Ola Fosheim Grostad via Digitalmars-d-learn

On Sunday, 23 May 2021 at 12:08:31 UTC, Alain De Vos wrote:
It is enough to have a somewhat complex GUI and database access, 
and the @safe annotation can be used nowhere in your program.

Without it, the compiler misses scope checks.


I think you are supposed to use @trusted to tell the compiler 
that the code is safe, if you are certain that calling the code 
cannot create issues.


I don't use @safe myself; maybe someone else has better advice.






Re: ugly and/or useless features in the language.

2021-05-23 Thread Ola Fosheim Grostad via Digitalmars-d-learn

On Sunday, 23 May 2021 at 10:24:53 UTC, Ola Fosheim Grostad wrote:

On Sunday, 23 May 2021 at 08:35:31 UTC, Tony wrote:
On Saturday, 15 May 2021 at 21:15:01 UTC, Ola Fosheim Grostad 
wrote:
Why are features added via metaprogramming better than the same 
features added to the language? One is standard between 
entities, the other is not.


There are many reasons, one important one is consistency and 
being able to prove that the type system is sound/reasonable.


Syntactic sugar is OK though, so adding syntax does not create 
problems as long as what it expands to can be expressed 
constructively using existing language constructs.


As an example, D focuses on memory safety, but in order to 
prove that property you have to prove that all possible 
combinations of language constructs (all valid programs) retain 
memory safety as a property. The more distinct features you have, 
the more difficult it gets, because you get more and more 
combinations.


Re: ugly and/or useless features in the language.

2021-05-23 Thread Ola Fosheim Grostad via Digitalmars-d-learn

On Sunday, 23 May 2021 at 08:35:31 UTC, Tony wrote:
On Saturday, 15 May 2021 at 21:15:01 UTC, Ola Fosheim Grostad 
wrote:
Why are features added via metaprogramming better than the same 
features added to the language? One is standard between 
entities, the other is not.


There are many reasons, one important one is consistency and 
being able to prove that the type system is sound/reasonable.


Syntactic sugar is OK though, so adding syntax does not create 
problems as long as what it expands to can be expressed 
constructively using existing language constructs.





Re: ugly and/or useless features in the language.

2021-05-22 Thread Ola Fosheim Grostad via Digitalmars-d-learn

On Saturday, 22 May 2021 at 17:32:34 UTC, sighoya wrote:
But I think providing an external ast tree mapped onto the 
changing internal one used by DMD would be a feasible approach.


It is feasible, but if you want to do it well you should think in 
terms of rewrite engines with pattern matching; think XSLT or Pure 
(not exactly, but in that direction).


I think it is better to design such a language from scratch.




Re: ugly and/or useless features in the language.

2021-05-17 Thread Ola Fosheim Grostad via Digitalmars-d-learn

On Sunday, 16 May 2021 at 16:16:22 UTC, H. S. Teoh wrote:
I cannot live without auto return types and Voldemort types. 
They are my bread and butter. Take them away, and I might as 
well go back to C/C++.


C++ has both?


What I find ugly:
- shared, and all of its quirks and incomplete implementation.


Shared semantics are wrong, as in not safe. Someone with a 
theoretical background should have been consulted... I am not 
really sure why it was given semantics with no complete solution; 
you cannot evolve concurrency designs that way.


- The fact that byte + byte cannot be assigned back to a byte 
without a cast.


I don't think you should be able to do anything with bytes without 
a cast...
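For reference, the behavior being debated; this much is standard D promotion, and the snippet compiles as shown:

```d
void main()
{
    byte a = 10, b = 20;
    static assert(is(typeof(a + b) == int)); // operands promote to int
    // byte c = a + b;                       // error: implicit narrowing
    byte c = cast(byte)(a + b);              // explicit cast required
    assert(c == 30);
}
```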


- Attribute proliferation. We should have had type inference 
integrated into the language from the beginning, but alas, that 
ship has already long sailed and it's too late to change that now


Why is it too late? I don't think it is.






Re: ugly and/or useless features in the language.

2021-05-15 Thread Ola Fosheim Grostad via Digitalmars-d-learn

On Saturday, 15 May 2021 at 14:31:08 UTC, Alain De Vos wrote:

Feature creep can make your own code unreadable.


Having many ways to express the same concept makes code harder to 
read. This is an issue in C++, but you can combat it by creating 
coding norms.


In general it is better to have fewer features and instead 
improve metaprogramming so that missing features can be done in a 
library.


Also some features could be merged, like alias and enum constants.



Re: Silicon Valley D Meetup - April 15, 2021 - "Compile Time Function Execution (CTFE)"

2021-04-17 Thread Ola Fosheim Grostad via Digitalmars-d-announce

On Sunday, 18 April 2021 at 00:38:13 UTC, Ali Çehreli wrote:
I heard about safety issues around allowing full I/O during 
compilation but then the following points kind of convinced me:


- If I am compiling a program, my goal is to execute that 
program anyway. What difference does it make whether the 
program's compilation is harmful vs. the program itself?


I don't buy this; you can execute the code in a sandbox.

Compilation should be idempotent; writing to disk/databases 
during compilation breaks this guarantee.


I would not use a language that does not ensure this.

- If we don't allow file I/O during compilation, then the build 
system has to take that responsibility and can do the potential 
harm then anyway.


The build system is much smaller, so easier to inspect.




Re: styx, a programming languange written in D, is on the bootstrap path

2021-01-19 Thread Ola Fosheim Grostad via Digitalmars-d-announce

On Monday, 18 January 2021 at 17:58:22 UTC, Basile B. wrote:

On Monday, 18 January 2021 at 17:51:16 UTC, Basile B. wrote:

on the internet nobody knows you're a dog ;)


https://de.fakenamegenerator.com/


Awww... And here I thought you were a fellow Norwegian... But I 
guess a dog is ok too.


Re: Why many programmers don't like GC?

2021-01-17 Thread Ola Fosheim Grostad via Digitalmars-d-learn

On Monday, 18 January 2021 at 01:41:35 UTC, James Blachly wrote:
Those were not aberba's words, but the author of the first 
link, in which one does find a conceptual, high level 
description of GC.


I read it; it said nothing of relevance to the D collector. That 
is not an informative TL;DR.




Re: I want to create my own Tuple type

2021-01-10 Thread Ola Fosheim Grostad via Digitalmars-d-learn

On Monday, 11 January 2021 at 01:49:26 UTC, Paul Backus wrote:
Why are these particular implementation details important to 
you?


It is for object.d.

I want to allow fast runtime indexing if all elements are of the 
same type.


If the types are different I want static indexing, so the plan is 
to resolve failed lookups such as __0 etc. by modifying the 
compiler.
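A rough sketch of the homogeneous case, with hypothetical names; it assumes identical field types are laid out contiguously (true when all fields share one type, since no padding is needed between them):

```d
import std.meta : allSameType;

struct Tup(T...)
{
    T fields;

    static if (T.length > 0 && allSameType!T)
    {
        // Identical field types leave no padding between fields, so a
        // runtime index is just a pointer offset from the first field.
        ref T[0] opIndex(size_t i)
        {
            return (cast(T[0]*) &fields[0])[i];
        }
    }
}

void main()
{
    auto t = Tup!(int, int, int)(1, 2, 3);
    assert(t[1] == 2);
    t[2] = 30;
    assert(t.fields[2] == 30);
}
```

A heterogeneous instantiation would simply not get `opIndex`, leaving only compile-time access to `fields`.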


Re: Printing shortest decimal form of floating point number with Mir

2021-01-03 Thread Ola Fosheim Grostad via Digitalmars-d-announce

On Monday, 4 January 2021 at 04:37:22 UTC, 9il wrote:
I suppose the answer would be that D doesn't pretend to support 
all C++ template features and the bug is not a bug because we 
live with this somehow for years.


But it is a bug even if there were no C++... An alias should work 
by simple substitution; if it does not, then it is not an alias...


I didn't believe it when I got a similar answer about IEEE 
floating-point numbers: D doesn't purport to be an IEEE 754 
compatible language, and the extended-precision bug is declared 
to be a language feature. I suppose we shouldn't expect D to 
pretend to be a robust language for large business projects.


I think this is up to the compiler?



Re: C++ or D?

2020-12-31 Thread Ola Fosheim Grostad via Digitalmars-d-learn

On Thursday, 31 December 2020 at 09:57:01 UTC, Imperatorn wrote:
On Thursday, 31 December 2020 at 07:32:31 UTC, RSY wrote: 
nowhere. Just use D and be happy and let others use C++ and let 
them be happy. But they should be aware that C++ *as a 
language* has a long way to go before it gets all the features 
etc that D has. Maybe 2023, maybe 2027, who knows. Maybe that's 
fine for them, but not for me.


Which features are you most concerned about? I think the feature 
sets are quite similar. D is less verbose and is easier to use for 
high-level programming, but clang++ and g++ have fewer bugs 
and quirks than the D compilers. The biggest difference is that 
C++ cannot change much, but D can! D really ought to make more 
of that advantage... more streamlining, even if it breaks stuff.





Re: C++ or D?

2020-12-30 Thread Ola Fosheim Grostad via Digitalmars-d-learn

On Thursday, 31 December 2020 at 07:17:45 UTC, RSY wrote:

It's like the story with the GC

You want everyone to like D because it has a GC, despite it 
not being updated in ages and having proved not to scale well


Fun fact: the C++ GC Oilpan (used in Chrome) has more features 
than the one in D...




Re: Mir vs. Numpy: Reworked!

2020-12-07 Thread Ola Fosheim Grostad via Digitalmars-d-announce

On Monday, 7 December 2020 at 13:48:51 UTC, jmh530 wrote:
On Monday, 7 December 2020 at 13:41:17 UTC, Ola Fosheim Grostad 
wrote:

On Monday, 7 December 2020 at 13:17:47 UTC, jmh530 wrote:

[snip]

"no need to calculate inverse matrix" What? Since when?


I don't know what he meant in this context, but a common 
technique in computer graphics is to build the inverse as 
you apply computations.


Ah, well if you have a small matrix, then it's not so hard to 
calculate the inverse anyway.


It is an optimization, maybe also for accuracy, dunno.
So, instead of ending up with only a transform from coordinate 
system A to B, you also get the transform from B to A for cheap. 
This may matter when the next step is to go from B to C... and so 
on...
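A one-dimensional analogue of the trick, as a hedged sketch (the `ScaleTrans` type and its API are invented for illustration; real graphics code would do the same with affine matrices):

```d
// Maintain the inverse alongside the forward transform, so
// inverting at the end costs nothing extra.
struct ScaleTrans
{
    double s = 1, t = 0;         // forward:  y = s*x + t
    double invS = 1, invT = 0;   // inverse:  x = invS*y + invT

    void compose(double s2, double t2)
    {
        // Append the step y' = s2*y + t2 to the forward transform...
        t = s2 * t + t2;
        s = s2 * s;
        // ...and prepend that step's inverse to the accumulated inverse.
        invT = invS * (-t2 / s2) + invT;
        invS = invS / s2;
    }
}

void main()
{
    ScaleTrans xf;
    xf.compose(2, 3);                 // y = 2x + 3
    xf.compose(0.5, -1);              // then y' = 0.5y - 1
    double y = xf.s * 4 + xf.t;       // forward-map x = 4
    double x = xf.invS * y + xf.invT; // map back, no solve needed
    assert(x > 3.999 && x < 4.001);
}
```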


Re: Mir vs. Numpy: Reworked!

2020-12-07 Thread Ola Fosheim Grostad via Digitalmars-d-announce

On Monday, 7 December 2020 at 13:17:47 UTC, jmh530 wrote:
On Monday, 7 December 2020 at 11:21:16 UTC, Igor Shirkalin 
wrote:

[snip]

Agreed. As a matter of fact, the simplest convolutions of 
tensors are out of date. It is as if there's no need to 
calculate an inverse matrix. Mir is useful work for the author, 
of course, but in practice it is almost never used. Everyone who 
needs something fast for their own tasks has to build the same 
things again in D.


"no need to calculate inverse matrix" What? Since when?


I don't know what he meant in this context, but a common technique 
in computer graphics is to build the inverse as you apply 
computations.


Re: low-latency GC

2020-12-06 Thread Ola Fosheim Grostad via Digitalmars-d-learn

On Sunday, 6 December 2020 at 17:28:52 UTC, Bruce Carneal wrote:
D is good for systems level work but that's not all.  I use it 
for projects where, in the past, I'd have split the work 
between two languages (Python and C/C++).  I much prefer 
working with a single language that spans the problem space.


My impression from reading the forums is that people either use D 
as a replacement for C/C++ or for Python/numpy, so I think your 
experience covers the essential use-case scenario that dominates 
current D usage? Any improvements have to improve both 
dimensions, I agree.


If there is a way to extend D's reach with zero or a near-zero 
complexity increase as seen by the programmer, I believe we 
should (as/when resources allow of course).


ARC involves a complexity increase, to some extent. Library 
authors have to think in a more principled way about when objects 
should be phased out and destructed, which I think tends to lead 
to better programs. It would also allow for faster precise 
collection. So it could be beneficial for all.




Re: low-latency GC

2020-12-06 Thread Ola Fosheim Grostad via Digitalmars-d-learn

On Sunday, 6 December 2020 at 17:35:19 UTC, IGotD- wrote:
Is automatic atomic reference counting a contender for kernels? 
In kernels you want to reduce the increase/decrease of the 
counts. Therefore the Rust approach using 'clone' is better 
unless there is some optimizer that can figure it out. 
Performance is important in kernels, you don't want the kernel 
to steal useful CPU time that otherwise should go to programs.


I am not sure if kernel authors want automatic memory management; 
they tend to want full control and transparency. Maybe it is 
something people who write device drivers would consider.


In general I think that reference counting should be supported 
in D, not only implicitly but also under the hood with fat 
pointers. This will make D more attractive for performance 
applications. Another advantage is that reference counting can 
use malloc/free directly if needed, without any complicated GC 
layer with associated metadata.


Yes, I would like to see it, just expect that there will be 
protests when people realize that they have to make ownership 
explicit.


Also, tracing GC in a kernel is in my opinion not desirable. For 
the reasons I previously mentioned: you want to reduce metadata, 
you want to reduce CPU time, you want to reduce fragmentation. 
Special allocators for structures are often used.


Yes, an ARC solution should support fixed size allocators for 
types that are frequently allocated to get better speed.





Re: low-latency GC

2020-12-06 Thread Ola Fosheim Grostad via Digitalmars-d-learn

On Sunday, 6 December 2020 at 14:44:25 UTC, Paulo Pinto wrote:
And while on the subject of low level programming in JVM or 
.NET.


https://www.infoq.com/news/2020/12/net-5-runtime-improvements/


It didn't say anything about low level, only SIMD intrinsics, 
which aren't really low level?


It also stated "When it came to something that is pure CPU raw 
computation doing nothing but number crunching, in general, you 
can still eke out better performance if you really focus on 
"pedal to the metal" with your C/C++ code."


So it is more of a Go contender, and Go is not a systems level 
language... Apples and oranges.


As I already mentioned in another thread, rebooting the 
language to pull in imaginary crowds will only do more damage 
than good, while the ones deemed unusable by the same imaginary 
crowd just keep winning market share, slowly and steady, even 
if takes yet another couple of years.


A fair number of people here are in that imaginary crowd.
So, I guess it isn't imaginary...


Re: low-latency GC

2020-12-06 Thread Ola Fosheim Grostad via Digitalmars-d-learn

On Sunday, 6 December 2020 at 14:11:41 UTC, Max Haughton wrote:
On Sunday, 6 December 2020 at 11:35:17 UTC, Ola Fosheim Grostad 
wrote:

On Sunday, 6 December 2020 at 11:27:39 UTC, Max Haughton wrote:

[...]


No, unique doesn't need indirection, and neither does ARC; we 
put the ref count at a negative offset.


shared_ptr is a fat pointer with the ref count as a separate 
object to support existing C libraries, and make weak_ptr easy 
to implement. But no need for indirection.



[...]


I think you need a new IR, but it does not have to be used for 
code gen; it can point back to the AST nodes that represent 
ARC pointer assignments.


One could probably translate the one used in Rust, even.


https://gcc.godbolt.org/z/bnbMeY


If you pass something as a parameter, then there may or may not be 
an extra reference involved. This is not specific to smart 
pointers, but ARC optimization should take care of that.




Re: low-latency GC

2020-12-06 Thread Ola Fosheim Grostad via Digitalmars-d-learn

On Sunday, 6 December 2020 at 12:58:44 UTC, IGotD- wrote:
I was thinking about how to deal with this in D and the 
question is if it would be better to be able to control move as 
default per type basis. This way we can implement Rust style 
reference counting without intruding too much on the rest of 
the language. The question is if we want this or if we should 
go for a fully automated approach where the programmer doesn't 
need to worry about 'clone'.


I don't know, but I suspect that people who use D want something 
more high level than Rust? But I don't use Rust, so...




Re: low-latency GC

2020-12-06 Thread Ola Fosheim Grostad via Digitalmars-d-learn

On Sunday, 6 December 2020 at 11:27:39 UTC, Max Haughton wrote:
ARC with a library will have overhead unless the compiler/ABI 
is changed e.g. unique_ptr in C++ has an indirection.


No, unique doesn't need indirection, and neither does ARC; we put 
the ref count at a negative offset.


shared_ptr is a fat pointer with the ref count as a separate 
object to support existing C libraries, and make weak_ptr easy to 
implement. But no need for indirection.
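A sketch of the negative-offset layout (all names invented; it assumes the payload's alignment does not exceed `size_t`'s):

```d
import core.stdc.stdlib : malloc, free;

// Allocate a T with its reference count one word below the payload,
// so a plain T* finds its count at a negative offset.
T* rcAlloc(T)()
{
    auto raw = cast(size_t*) malloc(size_t.sizeof + T.sizeof);
    raw[0] = 1;                   // initial count, at offset -1 word
    auto obj = cast(T*)(raw + 1);
    *obj = T.init;
    return obj;
}

void retain(T)(T* obj)  { (cast(size_t*) obj)[-1] += 1; }

void release(T)(T* obj)
{
    auto count = cast(size_t*) obj - 1;
    if (--*count == 0)
        free(count);              // free from the real allocation start
}

void main()
{
    auto p = rcAlloc!int();
    retain(p);   // count: 2
    release(p);  // count: 1
    release(p);  // count: 0, memory freed
}
```

Note the pointer handed out stays a thin pointer; only allocation and release need to know about the hidden word.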


The AST effectively is a high-level IR. Not a good one, but 
good enough. The system Walter has built shows the means are 
there in the compiler already.


I think you need a new IR, but it does not have to be used for 
code gen; it can point back to the AST nodes that represent ARC 
pointer assignments.


One could probably translate the one used in Rust, even.


Re: low-latency GC

2020-12-06 Thread Ola Fosheim Grostad via Digitalmars-d-learn

On Sunday, 6 December 2020 at 10:44:39 UTC, Max Haughton wrote:
On Sunday, 6 December 2020 at 05:29:37 UTC, Ola Fosheim Grostad 
wrote:
It has to be either some kind of heavily customisable small GC 
(i.e. with our resources the GC cannot please everyone), or 
arc. The GC as it is just hurts the language.


Realistically, we probably need some kind of working group or 
at least serious discussion to really narrow down where to go 
in the future. The GC as it is now must go, we need borrowing 
to work with more than just pointers, etc.


The issue is that it can't just be done incrementally, it needs 
to be specified beforehand.


ARC can be done incrementally; we can do it as a library first 
and use a modified version of the existing GC for detecting 
failed borrows at runtime during testing.


But all libraries that use owning pointers need ownership to be 
made explicit.


A static borrow checker and an ARC optimizer need a high-level 
IR, though. A lot of work.






Re: low-latency GC

2020-12-06 Thread Ola Fosheim Grostad via Digitalmars-d-learn
On Sunday, 6 December 2020 at 08:59:49 UTC, Ola Fosheim Grostad 
wrote:
Well, you could in theory avoid putting owning pointers on the 
stack/globals or require that they are registered as gc roots. 
Then you don't have to scan the stack. All you need then is 
write barriers. IIRC


And read barriers... I assume. However, with single-threaded 
incremental collection, write barriers should be enough.

Re: low-latency GC

2020-12-06 Thread Ola Fosheim Grostad via Digitalmars-d-learn

On Sunday, 6 December 2020 at 08:36:49 UTC, Bruce Carneal wrote:
Yes, but they don't allow low level programming. Go also 
freezes to sync threads; this has a rather profound impact on 
code generation. They have spent a lot of effort on sync 
instructions in code gen to lower the latency, AFAIK.


So, much of the difficulty in bringing low-latency GC to dlang 
would be the large code gen changes required.  If it is a 
really big effort then that is all we need to know.  Not worth 
it until we can see a big payoff and have more resources.


Well, you could in theory avoid putting owning pointers on the 
stack/globals or require that they are registered as gc roots. 
Then you don't have to scan the stack. All you need then is write 
barriers. IIRC
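A write barrier in such a scheme could be as simple as the following sketch (all names are hypothetical; `markGray` stands in for whatever the collector's mark-queue hook would be):

```d
__gshared bool collecting;  // true while an incremental cycle runs

void markGray(void* p)
{
    // Stub: a real collector would push p onto its mark queue so the
    // tricolor invariant holds for objects that become reachable
    // mid-cycle.
}

// Every pointer store into the heap goes through this hook.
void writeBarrier(void** slot, void* newVal)
{
    if (collecting && newVal !is null)
        markGray(newVal);
    *slot = newVal;
}
```

The cost is one branch per heap pointer store, which is why code-gen support matters for keeping it cheap.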







Re: low-latency GC

2020-12-06 Thread Ola Fosheim Grostad via Digitalmars-d-learn

On Sunday, 6 December 2020 at 07:45:17 UTC, Bruce Carneal wrote:
GCs scan memory, sure.  Lots of variations.  Not germane.  Not 
a rationale.


We need to freeze the threads when collecting stacks/globals.

D is employed at multiple "levels".  Whatever level you call 
it, Go and modern JVMs employ low latency GCs in multi-threaded 
environments.  Some people would like to use D at that "level".


Yes, but they don't allow low level programming. Go also freezes 
to sync threads; this has a rather profound impact on code 
generation. They have spent a lot of effort on sync instructions 
in code gen to lower the latency, AFAIK.


My question remains: how difficult would it be to bring such 
technology to D as a GC option?  Is it precluded somehow by the 
language?   Is it doable but quite a lot of effort because ...?
 Is it no big deal once you have the GC itself because you only 
need xyz hooks? Is it ...?


Get rid of the system stack and globals, use only closures, and 
put in a restrictive memory model. Then maybe you can get a fully 
no-freeze multi-threaded GC. That would be a different language.


Also, I think Walter may have been concerned about read barrier 
overhead but, again, I'm looking for feasibility information.  
What would it take to get something that we could compare?


Just add ARC + single threaded GC. And even that is quite 
expensive.





Re: low-latency GC

2020-12-05 Thread Ola Fosheim Grostad via Digitalmars-d-learn

On Sunday, 6 December 2020 at 05:41:05 UTC, Bruce Carneal wrote:
OK.  Some rationale?  Do you, for example, believe that 
no-probable-dlanger could benefit from a low-latency GC?  That 
it is too hard to implement?  That the language is somehow 
incompatible? That ...


The GC needs to scan all the affected call stacks before it can 
do incremental collection. Multi-threaded GC is generally not 
compatible with low level programming.





Re: low-latency GC

2020-12-05 Thread Ola Fosheim Grostad via Digitalmars-d-learn

On Sunday, 6 December 2020 at 05:16:26 UTC, Bruce Carneal wrote:
How difficult would it be to add a, selectable, low-latency GC 
to dlang?


Is it closer to "we cant get there from here" or "no big deal 
if you already have the low-latency GC in hand"?


I've heard Walter mention performance issues (write barriers 
IIRC).  I'm also interested in the GC-flavor performance trade 
offs but here I'm just asking about feasibility.


The only reasonable option for D is single threaded GC or ARC.






Re: Questions about D

2020-11-27 Thread Ola Fosheim Grostad via Digitalmars-d-learn

On Friday, 27 November 2020 at 19:56:38 UTC, Walter wrote:
On Friday, 27 November 2020 at 19:46:52 UTC, Ola Fosheim 
Grostad wrote:

Why not? What are you looking for?

I'm looking for a general purpose which I can use everywhere


It is fairly general, but I don't think it is the best option for 
targeting 16-bit or 8-bit CPUs. In that case C would probably be 
better.


Re: Questions about D

2020-11-27 Thread Ola Fosheim Grostad via Digitalmars-d-learn

On Friday, 27 November 2020 at 19:34:41 UTC, Walter wrote:

Hi, I have some questions to ask regarding D:

1) Should I learn D ?


Why not? What are you looking for?


2) Can I cross-compile D programs?


You should be able to with ldc/gdc if you have some experience.


3) Is it a low-level language?


Lower than high, higher than low. It is if you want it to be.


4) Which type of applications is D most used in?


Good question. I would assume command line applications, batch. 
Like C++.



5) Is it fast and not bloated with useless features?


It is fast, but I think you'll find one or two useless features 
if you go looking for them.




Re: const and immutable values, D vs C++?

2019-12-04 Thread Ola Fosheim Grostad via Digitalmars-d-learn
On Thursday, 5 December 2019 at 00:05:26 UTC, Ola Fosheim Grøstad 
wrote:
On Wednesday, 4 December 2019 at 23:27:49 UTC, Steven 
Schveighoffer wrote:

void main()
{
foo!a(); // const(int)
foo!b(); // immutable(int)
foo!c(); // const(int)
}


Ok, so one has to use a wrapper and then "catch" the result 
with auto?


auto x = foo!f();


Nevermind...


Re: wiki: D on AVR

2019-11-28 Thread Ola Fosheim Grostad via Digitalmars-d-announce
On Thursday, 28 November 2019 at 18:40:17 UTC, Ernesto 
Castellotti wrote:

Yes, LDC sets size_t for the platform, not violating the spec.
int in D is 32-bit; as you said, if you compare it with the 
sizes of AVR-GCC's types it would be long.
This is not a problem; just use type aliases like those in 
core.stdc.stdint to work around it.


Doesn't D promote all arithmetic operations to 32-bit even if the 
operands are 16-bit?
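The promotion in question is easy to observe with a host compiler; per D's integer promotion rules, sub-int operands are widened to int before arithmetic:

```d
void main()
{
    short a = 1, b = 2;
    static assert(is(typeof(a + b) == int)); // 16-bit operands promote
    short c = cast(short)(a + b);            // narrowing back needs a cast
    assert(c == 3);
}
```

On an 8-bit target like AVR that promotion implies 32-bit arithmetic for every short expression, which is exactly the concern raised here.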





Re: Quora: Why hasn't D started to replace C++?

2018-02-10 Thread Ola Fosheim Grostad via Digitalmars-d
On Sunday, 11 February 2018 at 00:06:07 UTC, psychoticRabbit 
wrote:
On Sunday, 11 February 2018 at 00:03:16 UTC, psychoticRabbit 
wrote:
On Tuesday, 30 January 2018 at 20:45:44 UTC, Andrei 
Alexandrescu wrote:

https://www.quora.com/Why-hasnt-D-started-to-replace-C++

Andrei


Why indeed!

Feature D C C++ C# Java (and this was back in 2003)

..2003 ..gee.. that was what..15 years ago.


Well, it isn't correct in 2018...



Re: How programmers transition between languages

2018-01-27 Thread Ola Fosheim Grostad via Digitalmars-d

On Saturday, 27 January 2018 at 13:56:35 UTC, rjframe wrote:
If you use an IDE or analysis/lint tool, you'll get type 
checking. The interpreter will happily ignore those annotations.


You need to use a type checker to get type checking... No 
surprise there, but without standard type annotations the type 
checker isn't all that useful. Only in the past few years have 
typing stubs become available for libraries, and that makes a 
difference.


Re: Old Quora post: D vs Go vs Rust by Andrei Alexandrescu

2018-01-04 Thread Ola Fosheim Grostad via Digitalmars-d

On Thursday, 4 January 2018 at 19:04:36 UTC, jmh530 wrote:
Pony relates to Rust in terms of what they are trying to 
accomplish with ownership. Pony's iso reference capability 
seems to mirror Rust's borrow checker rule that you can only 
have one mutable reference.


But Rust isn't using garbage collection...




Re: We're looking for a Software Developer! (D language)

2017-11-29 Thread Ola Fosheim Grostad via Digitalmars-d-announce

On Wednesday, 29 November 2017 at 15:11:17 UTC, Paulo Pinto wrote:
Wirth puts it nicely, it is all about algorithms, data 
structures and

learning how to apply them to any language.


Yes, they also mention machine learning, which borrows from many 
fields close to applied mathematics: linear algebra, statistical 
signal processing, statistical modelling, etc. I took a course 
on statistical signal processing this year (using Hayes' book 
plus extras), and experience without theoretical training would 
be inefficient. You have to tailor the algorithms to the 
characteristics of the signal...





Re: Thoughts about D

2017-11-29 Thread Ola Fosheim Grostad via Digitalmars-d
On Thursday, 30 November 2017 at 03:29:56 UTC, Walter Bright 
wrote:
The code *size* causes problems because it pushes the executing 
code out of the cache.


Not if you do a branch to a cold cacheline on assert failure.



Re: We're looking for a Software Developer! (D language)

2017-11-29 Thread Ola Fosheim Grostad via Digitalmars-d-announce

On Wednesday, 29 November 2017 at 10:47:31 UTC, aberba wrote:
to death learning this stuff in lectures. I learnt them beyond 
the syllabus years back on my own at a much quicker pace.


CS isn't about the languages themselves; that part is trivial. 
Basically covered in the first or second semester.


You become experienced and skilled when you're passionate about 
it.


Sure, imperative languages are all mostly the same, and easy to 
learn once you know the basics (C++ being an exception).  
Learning frameworks takes time, but there are too many frameworks 
for anyone to master, and they are quickly outdated.


So the only knowledge base that isn't getting outdated is the 
models from CS.





Re: Thoughts about D

2017-11-28 Thread Ola Fosheim Grostad via Digitalmars-d

On Tuesday, 28 November 2017 at 06:58:58 UTC, Elronnd wrote:
In that case, why is libstdc++ 12MB, while libphobos2 is half 
the size, at 5.5MB?


I haven't checked; if true, then probably because it contains code 
that goes beyond the minimal requirements (legacy, bloat, 
portability, tuning, etc.). Phobos contains more 
application-oriented APIs than C++17.




Re: Thoughts about D

2017-11-28 Thread Ola Fosheim Grostad via Digitalmars-d
On Tuesday, 28 November 2017 at 02:26:34 UTC, Neia Neutuladh 
wrote:
On Monday, 27 November 2017 at 17:35:53 UTC, Ola Fosheim 
Grostad wrote:
On Monday, 27 November 2017 at 16:44:41 UTC, Neia Neutuladh 
wrote:
I last used C++ professionally in 2015, and we were still 
rolling out C++11. std::string_view is part of C++17. You're 
calling me stupid for not having already known about it. 
(Yes, yes, you were sufficiently indirect to have a fig leaf 
of deniability.)


I'm not talking about you, obviously. I am talking about using 
languages stupidly...


You wrote "std::string does the same thing. So if I reimplemented 
subtex naively in C++, its performance would be closer to the C# 
version than to the D version."


"Naively" would mean that you didn't know better or that an 
alternative would be complex, but later on you acknowledged that 
doing it with slices would be better, but that you could not be 
bothered.  So you know better, but would rather choose to do it 
stupidly...


I have never said that you are stupid, what I said was the 
equivalent of  "std::string does the same thing. So if I 
reimplemented subtex stupidly in C++, its performance would be 
closer to the C# version than to the D version."


That line of reasoning is silly. I know that you know better, 
because you clearly stated so in the post I responded to.



allocating memory isn't slow simply because it requires 
executing a large number of instructions.


That's debatable...






Re: Precise GC state

2017-11-27 Thread Ola Fosheim Grostad via Digitalmars-d
On Monday, 27 November 2017 at 18:32:39 UTC, Ola Fosheim Grøstad 
wrote:

You get this:

shared_ptr -> control_block -> object



Actually, seems like the common implementation uses 16 bytes, so 
that it has a direct pointer as well. So twice the size of 
unique_ptr.





Re: Thoughts about D

2017-11-27 Thread Ola Fosheim Grostad via Digitalmars-d

On Monday, 27 November 2017 at 16:44:41 UTC, Neia Neutuladh wrote:
I last used C++ professionally in 2015, and we were still 
rolling out C++11. std::string_view is part of C++17. You're 
calling me stupid for not having already known about it. (Yes, 
yes, you were sufficiently indirect to have a fig leaf of 
deniability.)


I'm not talking about you, obviously. I am talking about using 
languages stupidly... You could use GSL string_span or the array 
version span, or write your own in 20 minutes. These are not 
language constructs, but library constructs, so they don't speak 
to the efficiency of the language...


An efficient text parser doesn't seem like a sufficiently 
unusual task that it should require you to create your own 
string type. A large swath of programs will use at least one 
text parser.


C++ requires you to write basically most things from scratch or 
use external libraries... What ships with it is very rudimentary. 
There are many parser libraries available.


C++ is very much batteries not included... Which is good for low 
level programming.


It is often useful to talk about real-world workloads when 
discussing performance.


Well, in that case Java was sufficiently fast, so all languages 
came out the same...


If we talk about language performance then we need to use a 
different approach. If we do a direct translation from lang A to 
B, then we essentially give A an advantage.


So that methodology is flawed.

Assuming that your CPU can execute 20 billion instructions per 
second, that means 1 billion per 50 ms, so your budget is 1 
million instructions per 400 bytes? Doesn't that suggest that the 
program is far from optimal or that most of the time is spent on 
something else?


Anyway, benchmarking different languages isn't easy, so it is 
common to do it badly... It is basically very difficult to do 
convincingly.




Re: Precise GC state

2017-11-27 Thread Ola Fosheim Grostad via Digitalmars-d
Btw, it would improve the discourse if people tried to 
distinguish between language constructs and library constructs...


Re: Precise GC state

2017-11-27 Thread Ola Fosheim Grostad via Digitalmars-d
On Monday, 27 November 2017 at 14:35:03 UTC, Dmitry Olshansky 
wrote:
Then watch Herb’s Sutter recent talk “Leak freedom by default”. 
Now THAT guy must be out of his mind :)


He could be, I haven't seen it... shared_ptr isn't frequently 
used; it is a last resort.


atomic_shared_pointer is nothing but what you seem to imply. 
It’s not manual sync for one.


Err... That was my point... Only assignment and reset are 
protected in shared_ptr; all other methods require manual sync.


You keep spreading FUD on this forum, I’m still not sure of 
your reasons though.


And there we go ad hominem as usual... With no argument to back 
it up... Bye.





Re: Precise GC state

2017-11-27 Thread Ola Fosheim Grostad via Digitalmars-d

On Monday, 27 November 2017 at 10:13:41 UTC, codephantom wrote:
But in a discussion about GC, some technical details might 
prove to be very useful to those of us following this 
discussion.


Precise scanning of pointers makes sense when you have many 
cachelines on the GC with no pointers in them. But if you mostly 
have pointers (a large graph or a tree) then it makes little 
difference.


You need to add a more extensive whole-program type analysis 
where you prove that the GC memory heap isn't reachable from a 
type... I.e. "pointers reachable through class T can provably 
never point into the GC heap in this specific program, so 
therefore we can ignore all pointers to T".





Re: Precise GC state

2017-11-27 Thread Ola Fosheim Grostad via Digitalmars-d

On Monday, 27 November 2017 at 09:38:52 UTC, Temtaime wrote:

Please stop this flame


There is no flaming.

Current GC in D is shit and all this speaking won't improve 
situation.


If so, why are you here? But you are fundamentally wrong. Precise 
GC will not bring a general improvement, for that you need 
advanced pointer analysis. So you need a change of philosophy to 
get a performant GC: semantic changes.





Re: Precise GC state

2017-11-26 Thread Ola Fosheim Grostad via Digitalmars-d
On Monday, 27 November 2017 at 06:59:30 UTC, Petar Kirov 
[ZombineDev] wrote:
the shared_ptr itself) and you can't opt out of that even if 
you're not sharing the shared_ptr with other threads.


Well, the compiler can in theory elide atomics if it can prove 
that the memory cannot be accessed by another thread.


But that rather misses the point: if it only lives in a single 
thread then it would typically have only one assignment. 
shared_ptr is for holding a resource, not for using it...




Re: Precise GC state

2017-11-26 Thread Ola Fosheim Grostad via Digitalmars-d
On Monday, 27 November 2017 at 06:47:00 UTC, Dmitry Olshansky 
wrote:
Last time I check shared_ptr can be safely shared across 
threads, hence RC is takling synchronization and most likely 
atomics since locks won’t be any better.


The control block can, but it is crazy to use shared_ptr for 
anything more than high-level ownership. It is a general solution 
with weak pointers and extra indirection, not a typical RC 
implementation for data structures.



In C++ sync is manual, which is the only efficient way to do


??? shared_ptr is nowhere manual.


There is an upcoming atomic_shared_ptr, but it is not in the 
standard yet.


My post is about particular primitive in C++ std, what could be 
done instead or in addition to is not important.


Oh, but it is.

1. D currently does not provide what you says it does.

2. Sane C++ programmers rarely use shared_ptr for more than 
exchanging ownership (suitable for sharing things like bitmap 
textures). There are plenty of other RC implementations for 
tracking memory. So you compare apples and oranges.






Re: Precise GC state

2017-11-26 Thread Ola Fosheim Grostad via Digitalmars-d
On Monday, 27 November 2017 at 05:47:49 UTC, Dmitry Olshansky 
wrote:
likely via RAII. Not to mention cheap (thread-local) Ref 
Counting, C++ and many other language have to use atomics which 
makes RC costly.


No, you don't. Nobody in their right mind would do so in C++ as a 
general solution. Seems there is a trend in doing D advocacy 
based on the assumption that programmers using other languages 
are crazy these days.


In C++ sync is manual, which is the only efficient way to do it. 
Proving correctness for an efficient general solution is an 
unsolved theoretical problem. You can do it for high level 
mechanisms, but not low level atm.


Rust and Pony claims to have solutions, but they are not general. 
D most certainly does not have it and never will.


When threading is a libray type then you cannot achieve more in D 
than you can achieve in C++, i.e. Shared is not going to do more 
than a C++ library type with a separate static analysis tool.


Re: Thoughts about D

2017-11-26 Thread Ola Fosheim Grostad via Digitalmars-d

On Monday, 27 November 2017 at 05:11:06 UTC, Neia Neutuladh wrote:
You might say that I could use C++ style manual memory 
management and get even better performance. And you'd be wrong.


No... Not if you do it right, but it takes more planning, i.e. 
design. Which is why scripting and high-level languages don't use 
it.


std::string does the same thing. So if I reimplemented subtex 
naively in C++, its performance would be closer to the C# 
version than to the D version.


You meant stupidly; you would rather use std::string_view for 
string references in C++. std::string is a library convenience 
type that typically is only used for debugging and filenames. If 
you want performance then it really isn't possible to make do with 
a fixed library type for strings, so in a realistic program people 
would write their own.


I could probably get slightly better performance than the D 
version by writing a special `stringslice` struct. But that's a 
lot of work,


No...

On the whole, it sounds like you don't like D because it's not 
C++. Which is fine, but D isn't going to become C++.


Sure. But maybe you shouldn't use a tiny 400k input when 
discussing performance. Try to think about how many instructions 
a CPU executes in 50ms...


If you don't know C++ then it makes no sense for you to compare 
performance to C++.




Re: Precise GC state

2017-11-26 Thread Ola Fosheim Grostad via Digitalmars-d
On Sunday, 26 November 2017 at 19:11:08 UTC, Jonathan M Davis 
wrote:
We can't even have different heaps for immutable and mutable 
stuff, because it's very common to construct something as 
mutable and then cast it to immutable (either explicitly or


This is easy to fix: introduce a uniquely owned type (isolated) 
that can only transition to immutable.


So it is more about being willing to tighten up the semantics. 
Same thing with GC, but everything has a cost.


That said, Adam has a point about getting more users; it isn't 
obvious that the costs wouldn't be offset by increased interest. 
Anyway, it seems like C# and Swift are pursuing the domain D is 
in by gradually expanding into more performance-oriented 
programming mechanisms... We'll see.


Re: Precise GC state

2017-11-26 Thread Ola Fosheim Grostad via Digitalmars-d
On Sunday, 26 November 2017 at 08:49:42 UTC, Dmitry Olshansky 
wrote:
Sadly you can’t “skip” write barriers in your @system code 
because it may run as part of larger @safe. Which is where they


Well, you can if you carefully lock the GC runtime, or if you 
don't modify existing scannable pointers that point to existing 
objects (e.g. you could fill an empty array of pointers in unsafe 
code, or pointers that point to something unscannable), but all 
unsafe code would need vetting.


So it isn't impossible technically, but it is impossible without a 
change of philosophy.


Re: .NET introduces Span, basically D slices

2017-11-26 Thread Ola Fosheim Grostad via Digitalmars-d

On Sunday, 26 November 2017 at 05:36:15 UTC, Guy wrote:
It's funny you say that because they just announced the 
introduction of ranges and I believe they return Spans.


Well, basic dataflow pipelines with implicit transfer of buffer 
ownership. So it is a language feature with implicit RAII 
lifetime management, which is why Span is limited to the stack. 
Then they have a counterpart to Span called Memory that can be 
stored on the GC heap.


A reasonable tradeoff, but a general constraint would have been 
more interesting.


Re: Homework services

2017-11-25 Thread Ola Fosheim Grostad via Digitalmars-d
On Saturday, 25 November 2017 at 14:29:08 UTC, Steven 
Schveighoffer wrote:
If you're looking to use D for web development, this is what I 
recommend:


SEO...spam...




Re: [OT] Re: Introducing Nullable Reference Types in C#. Is there hope for D, too?

2017-11-24 Thread Ola Fosheim Grostad via Digitalmars-d

On Saturday, 25 November 2017 at 01:23:03 UTC, codephantom wrote:
And thankyou. This a much more constructive option for users 
that disagree with something I say. i.e. Now they can just hide 
me, instead of attacking me.


Don't worry, both Walter and Andrei have done far worse in these 
fora over the years than you do... Or "forums" as the English 
quite incorrectly spell it.


I'll give that a go next time.. otherwise people will start 
wanting the forum to implement a spell checker...and a thesuras 
(how do you spell that anyway??).


Thesauri ?


Re: Introducing Nullable Reference Types in C#. Is there hope for D, too?

2017-11-22 Thread Ola Fosheim Grostad via Digitalmars-d

On Thursday, 23 November 2017 at 01:16:59 UTC, codephantom wrote:

That's why we have the concept of 'undefined behaviour'.


Errr, no.  High level programming languages don't have undefined 
behaviour. That is a C concept related to the performance of the 
executable. C tries to get as close to machine language as 
possible.


Re: Introducing Nullable Reference Types in C#. Is there hope for D, too?

2017-11-22 Thread Ola Fosheim Grostad via Digitalmars-d

On Thursday, 23 November 2017 at 01:33:39 UTC, codephantom wrote:
On Thursday, 23 November 2017 at 00:15:56 UTC, Ola Fosheim 
Grostad wrote:

By what proof? And what do you mean by mathematics?


A mathematical claim, that cannot be proven or disproven, is 
neither true or false.


What you are left with, is just a possibility.


And how is this a problem? If your program relies upon the 
unbounded version you will have to introduce it explicitly as an 
axiom. But you don't have to; you can use bounded quantifiers.


What you seem to be saying is that one should accept all unproven 
statements as axioms implicitly. Why have a type system at all 
then?


Thus, it will always remain an open question as to whether the 
conjecture is true, or not.


Heh, has the Goldbach conjecture been proven undecidable?




Re: Introducing Nullable Reference Types in C#. Is there hope for D, too?

2017-11-22 Thread Ola Fosheim Grostad via Digitalmars-d

On Thursday, 23 November 2017 at 00:06:49 UTC, codephantom wrote:
true up to a number < n  ... does not address the conjecture 
correctly.


So what? We only need a proof up to N for regular programming, 
if at all.



hint. It's not a problem that mathematics can solve.


By what proof? And what do you mean by mathematics?




Re: Introducing Nullable Reference Types in C#. Is there hope for D, too?

2017-11-21 Thread Ola Fosheim Grostad via Digitalmars-d

On Tuesday, 21 November 2017 at 06:03:33 UTC, Meta wrote:
I'm not clear on whether he means that Java's type system is 
unsound, or that the type checking algorithm is unsound. From 
what I can tell, he's asserting the former but describing the 
latter.


He claims that type systems with existential rules, hierarchical 
relations between types and null can potentially be unsound. His 
complaint is that if Java had been correctly implemented to the 
letter of the spec then this issue could have led to heap 
corruption if exploited by a malicious programmer.


Runtime checks are part of the type system though, so it isn't 
unsound as implemented, since the generated JVM bytecode does 
runtime type checks upon assignment.


AFAIK the complaint assumes that information from generic 
constraints isn't kept on a separate level.


It is a worst case analysis of the spec...




Re: ESR on post-C landscape

2017-11-16 Thread Ola Fosheim Grostad via Digitalmars-d-learn

On Friday, 17 November 2017 at 00:36:21 UTC, codephantom wrote:
On Thursday, 16 November 2017 at 11:52:45 UTC, Ola Fosheim 
Grostad wrote:


Uhm, no? What do you mean by 'primary focus of program design' 
and in which context?




I the context that, this is specifically what Stroustrup says 
in his book (The Design and Evolution of C++ 1994)


"Simula's class concept was seen as the key difference, and 
ever since I have seen classes as the proper primary focus of 
program design." (chp1, page 20)


Yes, that is reasonable, it is very useful when made available in 
a generic form. I believe Simula was used in teaching at his 
university.


A class in Simula is essentially a record, library scope, 
block-scope, coroutines, with inheritance and virtual functions, 
implemented as a closure where the body acts as an extensible 
constructor. So it is a foundational primitive. Nygaard and Dahl 
got the Turing award for their work. Ole-Johan Dahl was also a 
coauthor of an influential book on structured programming which 
had a chapter on it IIRC. Nygaard and others in Denmark later in 
the 70s and 80s refined the class concept into a unifying concept 
that was essentially the only primary building block in Beta 
(called pattern, which allows functions to be extended using 
inheritance, instantiation of objects from virtual patterns, type 
variables as members etc).


So Beta establishes that you don't need structural mechanisms 
other than a powerful class concept + tuples for parameters.


Self establishes the same with objects.

Freud would tell us that Stroustrup's obsession with Simula is 
where it all began.


Anyone that cares already know that Simula was an ancestor for 
C++, Smalltalk, Java and many other OO languages... But 
Stroustrup wasn't obsessed by Simula, if he was he would have 
added things like coroutines, local functions, used the class as 
a module scope etc. He also would have avoided multiple 
inheritance.




Re: ESR on post-C landscape

2017-11-16 Thread Ola Fosheim Grostad via Digitalmars-d-learn
On Thursday, 16 November 2017 at 18:02:10 UTC, Patrick Schluter 
wrote:
The sheer amount of inscrutable cruft and rules, plus the 
moving target of continuously changing semantics an order or 
two of magnitude bigger than C added to the fact that you still 
need to know C's gotchas, makes it one or two order of 
magnitude more difficult to mental model the hardware.


I don't feel that way, most of what C++ adds to C happens on a 
typesystem or textual level. The core language is similar to C.



Even worse in C++ with its changing standards ever 5 years.


But those features are mostly short hand for things that already 
are in the language. E.g. lambdas are just objects, move 
semantics is just an additional nominal ref type with barely any 
semantics attached to it (some rules for coercion to regular 
references)...


So while these things make a difference, it doesn't change my low 
level mental model of C++, which remain as close to C today as it 
did in the 90s.


Re: ESR on post-C landscape

2017-11-16 Thread Ola Fosheim Grostad via Digitalmars-d-learn
On Thursday, 16 November 2017 at 18:06:22 UTC, Patrick Schluter 
wrote:
On Tuesday, 14 November 2017 at 16:38:58 UTC, Ola Fosheim 
Grostad wrote:
changing. C no longer models the hardware in a reasonable 
manner.


Because of the flawed interpretation of UB by the compiler 
writers, not because of a property of the language itself.


No, I am talking about the actual hardware, not UB. In the 80s 
there was an almost 1-to-1 correspondence between C and CPU 
internals. CPUs are still designed for C, but the more code shifts 
away from C, the more rewarding it will be for hardware designers 
to move to more parallel designs.


Re: ESR on post-C landscape

2017-11-16 Thread Ola Fosheim Grostad via Digitalmars-d-learn

On Thursday, 16 November 2017 at 11:24:09 UTC, codephantom wrote:
On Thursday, 16 November 2017 at 06:35:30 UTC, Ola Fosheim 
Grostad wrote:
Yes, I agree that classes are a powerful modelling primitive, 
but my point was that Stroustrup made classes the 'primary 
focus of program design'. Yes, that made it more uniform 
alright... uniformly more complicated. And why? Because he went 
on to throw C into the mix, because performance in Simula was 
so poor, and would not scale. C promised the efficiency and 
scalability he was after. But an efficient and scalable 'class 
oriented' language, means complexity was inevitable.


Nah, he is just making excuses. Simula wasn't particularly slow 
as a design, but used a GC similar to the one in D and bounds 
checks on arrays, like D. C++ was just a simple layer over C and 
evolved from that. Had nothing to do with language design, but 
was all about cheap implementation. Initial version of C++ was 
cheap and easy to do.


I would never say OO itself is a failure. But the idea that is 
should be the 'primary focus of program design' .. I think that 
is a failure...and I think that principle is generally accepted 
these days.


Uhm, no? What do you mean by 'primary focus of program design' 
and in which context?


If the next C++ doesn't get modules, that'll be the end of 
it...for sure.


I like namespaces. Flat is generally better when you want 
explicit qualifications.


Yeah..but into what? It's all those furry gopher toys, 
t-shirts, and playful colors.. I think that's what's attracting 
people to Go. Google is the master of advertising afterall. 
Would work well in a kindergarten. But it makes me want to 
puke. It's so fake.


It is the runtime and standard library. And stability. Nice for 
smaller web services.


correct the past. They should be focused on the future. They 
should have got some experienced younger programmers at google 
to design a language instead. I bet it wouldn't look anything 
like Go.


Go isn't exciting and has some shortcomings that are surprising, 
but they managed to reach a stable state, which is desirable when 
writing server code. It is this stability that has ensured that 
they could improve the runtime. ("experienced young 
programmers" is a rather contradictory term, btw :-)





Re: ESR on post-C landscape

2017-11-15 Thread Ola Fosheim Grostad via Digitalmars-d-learn
On Thursday, 16 November 2017 at 06:51:58 UTC, rikki cattermole 
wrote:

On 16/11/2017 6:35 AM, Ola Fosheim Grostad wrote:
Thing is, it is a failure, the way most people use it.


You can say that about most things: exceptions, arrays, pointers, 
memory, structs with public fields... But I guess what you are 
saying is that many people aren't good at modelling...



When used correctly it is a very nice additive to any code base.
It just can't be the only one.


Well, it can in a flexible OO language (niche languages). However, 
it was never meant to be used out of context, i.e. not meant to be 
used for "pure math".


Re: ESR on post-C landscape

2017-11-15 Thread Ola Fosheim Grostad via Digitalmars-d-learn

On Thursday, 16 November 2017 at 02:12:10 UTC, codephantom wrote:
Perhaps the mistake C++ made, was concluding that 'classes' 
were the "proper primary focus of program design" (chp1. The 
Design and Evolution of C++).


No, the class is a powerful modelling primitive. C++ got that 
right. C++ is also fairly uniform because of it. Not as uniform 
as Self and Beta, but more so than D.


People who harp about how OO is a failure don't know how to do 
real world modelling...


I have to wonder whether that conclusion sparked the inevitable 
demise of C++.


There is no demise...

Eric should be asking a similar question about Go ..what 
decision has been made that sparked Go's inevitable demise - or 
in the case of Go, decision would be decisions.


Go is growing...


a := b


A practical shorthand; if you don't like it, then don't use it.




Re: ESR on post-C landscape

2017-11-15 Thread Ola Fosheim Grostad via Digitalmars-d-learn

On Wednesday, 15 November 2017 at 10:40:50 UTC, codephantom wrote:
On Wednesday, 15 November 2017 at 09:26:49 UTC, Ola Fosheim 
Grøstad wrote:


I don't think Go is much affected by the corporate…


Umm

"We made the language to help make google more productive and 
helpful internally" - Rob Pike


I know, I followed the debate for a while, but that sounds much 
more like a defence of their own minimalistic aesthetics (which 
they don't deny) than a corporate requirement. With a different 
team Go most certainly would have exceptions and generics, like 
Dart. Makes no sense to claim that their server programmers are 
less skilled than their front end programmers?


Re: ESR on post-C landscape

2017-11-14 Thread Ola Fosheim Grostad via Digitalmars-d-learn

On Tuesday, 14 November 2017 at 11:55:17 UTC, codephantom wrote:
The reason he can dismiss D, so easily, is because of his 
starting premise that C is flawed. As soon as you begin with 
that premise, you justify searching for C's replacement, which 
makes it difficult to envsion something like D.


Well, in another thread he talked about the Tango split, so not 
sure where he is coming from.


That's why we got C++, instead of D. Because the starting point 
for C++, was the idea that C was flawed.


No, the starting point for C++ was that Simula is better for a 
specific kind of modelling than C.



C is not flawed. It doesn't need a new language to replace it.


It is flawed... ESR got that right, not sure how anyone can 
disagree. The only thing C has going for it is that CPU designs 
have been adapted to C for decades. But that is changing. C no 
longer models the hardware in a reasonable manner.


If that was the starting point for Go and Rust, then it is ill 
conceived.


It wasn't really. The starting point for Go was just as much a 
language used to implement Plan 9. I don't know about Rust, but it 
looks like an ML spinoff.


One should also not make the same error, by starting with the 
premise that we need a simpler language to replace the 
complexity of the C++ language.


Why not? Much of the evolved complexity of C++ can be removed by 
streamlining.


If that was the starting point for Go and Rust, then it is ill 
conceived.


It was the starting point for D...

What we need, is a language that provides you with the 
flexibility to model your solution to a problem, *as you see 
fit*.


If that were my starting point, then it's unlikely I'd end up 
designing Go or Rust. Only something like D can result from 
that starting point.


Or C++, or ML, or BETA, or Scala, or etc etc...

Because then, it's unlikely he would get away with being so 
dismissive of D.


If he is dismissive of C++ and Rust then he most likely will 
remain dismissive of D as well?





Re: Actor model & D

2017-11-11 Thread Ola Fosheim Grostad via Digitalmars-d
On Saturday, 11 November 2017 at 21:47:53 UTC, Dmitry Olshansky 
wrote:

On Saturday, 11 November 2017 at 20:37:59 UTC, Ola Fosheim


That's a library


So what? Should we say that C doesn't support threads because 
they are implemented in the library?


Regular C is not a concurrent language.
D is not an actor based language.

Has nothing to do with library features.



and it does not have much to do with actors, i.e. it does not 
ensure that every actor is an independent entity.


What’s not independent about thread? How it doesn’t ensure that?


What is independent about a thread? A process is independent 
(mostly).

How can it ensure that?




Re: Actor model & D

2017-11-11 Thread Ola Fosheim Grostad via Digitalmars-d
On Saturday, 11 November 2017 at 18:30:33 UTC, Dmitry Olshansky 
wrote:
On Saturday, 11 November 2017 at 13:31:20 UTC, Ola Fosheim 
Grøstad wrote:

On Monday, 19 August 2013 at 03:11:00 UTC, Luís Marques wrote:
Can anyone please explain me what it means for the D language 
to follow the Actor model, as the relevant Wikipedia page 
says it does? [1]


[1] 
http://en.wikipedia.org/wiki/Actor_model#Later_Actor_programming_languages


The page is largely unverified, i.e. nobody cares that it is 
full of errors…


D does not follow the actor model in any way shape or form…


Wat? std.concurrency is message passing where an actor is 
either a Fiber or Thread.


That's a library and it does not have much to do with actors, 
i.e. it does not ensure that every actor is an independent entity.


Re: Webassembly?

2017-07-06 Thread Ola Fosheim Grostad via Digitalmars-d

On Thursday, 6 July 2017 at 18:26:18 UTC, Joakim wrote:
so you can seamlessly pass objects to javascript.  I believe 
people have written their own GCs that target webasm, so the D 
GC can likely be made to do the same.


You would have to emulate the stack...


Re: Isn't it about time for D3?

2017-06-15 Thread Ola Fosheim Grostad via Digitalmars-d

On Wednesday, 14 June 2017 at 22:01:38 UTC, bachmeier wrote:
It's a bigger problem for D than for those languages. If you 
introduce too many changes, the tools stop working, and we 
don't have the manpower to fix them. The same goes for 
libraries. A language with a larger group of developers, like 
Python, can be less conservative about breaking changes and not 
have it disrupt the ecosystem.


Yes, on the other hand, if you reach a stable version and stop 
adding features then you can focus on and reach maturity... Then 
leave the new features to the unstable version.


If you keep adding features to a stable branch then maturity is 
perpetually out of reach. I think Go gained a lot of ground from 
focusing on maturity over features.







Re: A Few thoughts on C, C++, and D

2017-05-30 Thread Ola Fosheim Grostad via Digitalmars-d

On Tuesday, 30 May 2017 at 15:06:19 UTC, Jacob Carlborg wrote:

On 2017-05-30 14:27, Ola Fosheim Grøstad wrote:


Maybe even turning some macros into functions?


DStep can do that today.


That's cool!  How robust is it in practice on typical header files 
(i.e. zlib and similar)?


What were the objections to integration with DMD?



Re: A Few thoughts on C, C++, and D

2017-05-29 Thread Ola Fosheim Grostad via Digitalmars-d

On Tuesday, 30 May 2017 at 01:46:02 UTC, bachmeier wrote:
I'm not necessarily disagreeing with RW's post. My reading is 
that the goal would be to get D into the enterprise, but maybe 
I misinterpreted. If D as a successor to Vala leads to more 
projects like Tilix, that's great.


I never quite understood the enterprise-focus either. What I like 
to see for a language is a difficult use scenario being 
maintainable. I sometimes browse large code bases just to see if 
a language leads to readable code.


writing better documentation for Dub, and so on. Incremental 
improvements lead to incremental adoption of D.


Yes, I think retention is the most important factor in the case 
of D. Identify and understand why polyglot programmers either 
stay with D or leave. Then give those areas the highest priority, 
especially exit-triggering issues.


Focusing on getting many libraries won't work, because you need 
to maintain them. I never use unmaintained libraries... Having 
many unmaintained libraries is in a way worse than having a few 
long-running ones that improve at a steady pace.


I'll also note that Vala didn't catch on, so being the 
successor to Vala by itself may not help D adoption.


Being perceived as the best for something helps. Vala was the 
best for something narrow. I think Rust is being perceived as the 
best for runtime-less programming with high level features (right 
or wrong) and Go is perceived as having a runtime for web 
services.


So I personally perceive Rust and Go in different sectors of the 
spectrum. I have more problems placing Nim, Haxe, D etc.




Re: Should out/ref parameters require the caller to specify out/ref like in C#?

2017-05-29 Thread Ola Fosheim Grostad via Digitalmars-d
On Monday, 29 May 2017 at 05:39:41 UTC, Nick Sabalausky 
(Abscissa) wrote:
Did you intend that as a response to my post or to the OP? 
Sounds more like it was directed at the OP.


I tried to reply to:

<>


But failed... probably because you had two messages in a row... Sorry.


Re: Should out/ref parameters require the caller to specify out/ref like in C#?

2017-05-28 Thread Ola Fosheim Grostad via Digitalmars-d
On Monday, 29 May 2017 at 01:56:19 UTC, Nick Sabalausky 
(Abscissa) wrote:

On 05/28/2017 03:06 PM, Meta wrote:


If you didn't know that the function takes its parameters by 
ref or out... you should've RTFM.


That's the same reasoning that's been used to excuse just about 
every API blunder in C's infamously unsafe bug-riddled history.


This is information that a good IDE could be designed to provide. 
To require "ref" at the call site would be rather pointless, as it 
would make the feature redundant: just use a pointer and "&" instead, 
and argue in favour of non-nullable static analysis...
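A minimal C++ sketch of the trade-off being argued here (the helper names `incr_ptr`/`incr_ref` are hypothetical, purely for illustration): a pointer parameter already forces `&` at the call site, which is exactly the visibility that a mandatory `ref` annotation would buy, while a reference parameter hides the mutation.

```cpp
// Call-site visibility: a pointer parameter forces '&' at the call site,
// making mutation explicit; a reference parameter hides it.
void incr_ptr(int* x) { ++*x; }   // caller must write incr_ptr(&n)
void incr_ref(int& x) { ++x; }    // caller writes just incr_ref(n)

int demo() {
    int n = 0;
    incr_ptr(&n);  // mutation visible at the call site
    incr_ref(n);   // mutation invisible at the call site
    return n;
}
```

An IDE can surface the reference-ness of a parameter at the call site either way, which is the point made above.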


Re: Safe code as an I/O requirement

2017-05-28 Thread Ola Fosheim Grostad via Digitalmars-d

On Sunday, 28 May 2017 at 16:58:53 UTC, aberba wrote:

https://lwn.net/Articles/708196/

From the look of things and feedbacks from several security 
analysts and system developers, [exposed] I/O needs to be 
memory safe.


GStreamer multimedia library developed in C has safety issues 
[see article]. What would its safety be if it was written in D 
(along with its plugins)?


It consists of many libraries. Audio/video decoders tend to be 
selected based on performance, so no bounds checks. You can 
usually do it in a safe manner, but then you either need to adapt 
all the algorithms or prove correctness. Both alternatives are 
expensive. So really, sandboxing sounds like a more realistic 
alternative for an open source media player that aims to support 
all formats using third party codecs...




Re: Trip notes from Israel

2017-05-27 Thread Ola Fosheim Grostad via Digitalmars-d-announce

On Saturday, 27 May 2017 at 20:21:56 UTC, Meta wrote:
On Saturday, 27 May 2017 at 10:50:34 UTC, Ola Fosheim Grøstad 
wrote:
Don't mistake my intentions. I proposed removing `body` because 
not being able to use it as a symbol name is often complained 
about on the forums, because it is a small, manageable and 
understandable change, because it is a net (admittedly tiny) 
improvement to the language, and because I wanted to write a


Sure, there are many small improvements that could be made, but 
this is a lot of bureaucracy for the 15 minutes it takes to turn 
it into a contextual keyword...




Re: What would break if class was merged with struct

2017-05-27 Thread Ola Fosheim Grostad via Digitalmars-d

On Saturday, 27 May 2017 at 20:24:26 UTC, Moritz Maxeiner wrote:


Sure, and the definition requires it.

[1] https://dlang.org/spec/abi.html#delegates


Please note that an ABI is an implementation-specific linkage 
detail; it is not portable, so it does not define language 
semantics.




Re: What would break if class was merged with struct

2017-05-27 Thread Ola Fosheim Grostad via Digitalmars-d

On Saturday, 27 May 2017 at 20:24:26 UTC, Moritz Maxeiner wrote:
On Saturday, 27 May 2017 at 19:26:50 UTC, Ola Fosheim Grøstad 
wrote:
On Saturday, 27 May 2017 at 19:01:12 UTC, Moritz Maxeiner 
wrote:
Here, `bar`, takes a (pointer to a) class instance as 
parameter `foo`. `foo` is a single pointer, i.e. 8 bytes on a 
64bit OS, with *no* special semantics.


Does the language spec say anything about the size of class 
references?


Yes, it is defined as `ptrsize` and must have the exact same 
size as a pointer to a struct and - more importantly - a 
pointer to a stack frame[1].


Huh? You are talking about lambdas?

Didn't find anything on class references in that Intel-specific 
ABI, which appears to be optional anyway.



Yes, see above link. Unless you make *all* stack frame pointers 
smart pointers (which makes no sense whatsoever), class 
instances cannot be smart pointers in the language as it is 
specified right now.


I don't understand. Why are frame pointers relevant for class 
references?





Sure, and the definition requires it.


Why?





Re: The syntax of sort and templates

2017-05-26 Thread Ola Fosheim Grostad via Digitalmars-d-learn

On Friday, 26 May 2017 at 15:49:06 UTC, Ali Çehreli wrote:
For example, Stroustrup has the article "Learning Standard C++ 
as a New Language"[1]. It compares sorting performance of C to 
C++ in section 3, "Efficiency". With those old C and C++ 
compilers he used (in May 1999), C++ was 1.74 to 4.62 times 
faster than C.


C++ language advocacy bullshit. C++ template bloat wouldn't be 
able to run on the low-end machines of that era. The only things 
from the C stdlib you would use in performance code are memcpy and 
math.h; the rest is convenience and baggage...


If people have to use C stdlib as an example then they have 
already lost the argument...






Re: Warning, ABI breakage from 2.074 to 2.075

2017-05-26 Thread Ola Fosheim Grostad via Digitalmars-d

On Friday, 26 May 2017 at 13:23:20 UTC, Jason King wrote:
wanted to fix a problem with the underlying system.   Trying to 
build
something on top of an unstable ABI is building your 
foundations on sand.


All I’m saying is if no attention is going to be paid to this 
(it doesn’t mean you can’t change the ABI, but it needs to be 
managed it better than ‘whoops!’), just stop claiming the 
systems bit and stay up stack where this isn’t a problem.


There is some truth to this, as BeOS used C++ and ABI was a 
concern, but it really depends on the context. D has too big a 
runtime and too many runtime-dependent features to be classified 
as a low-level language anyway, though...





Re: std.functional.memoize : thread local or __gshared memoization?

2017-05-25 Thread Ola Fosheim Grostad via Digitalmars-d

On Thursday, 25 May 2017 at 20:43:36 UTC, Jonathan M Davis wrote:
complication to the language. Certainly, from what I know of 
Rust, it's far more complicated because of that sort of thing, 
and glancing over that link on Pony, it looks like it's getting 
a fair bit of complication as well in order to deal with the 
problem.


I think Pony uses a GC (also to collect dead threads/actors).

But I have found that trying to understand their model to be a 
good exercise for thinking about where problems can arise and 
what it takes to resolve it through a typesystem.


stuff with regards to threads to shared, which is great, but 
dealing with the stuff that involves sharing across threads 
then requires that the programmer be much more careful than 
would be the case if the type system were actually helping you 
beyond preventing you from doing stuff that isn't thread safe 
to a shared object without casting it to thread-local first.


Yes, that transition to/from shared is problematic. There are 
ways to deal with it, proving concurrency patterns to be correct, 
but it takes even more machinery than Pony/Rust.


I don't know what the right answer is. Rust seems to get both 
praise and complaints about its approach - as do we for ours. 
But both approaches are safer than what you get with C/C++.


Depends on what you do in C++. If you only share through a 
ready-made framework, then you probably can do quite well in 
terms of safety, with a small performance cost.


If you want max performance in the general case then I think all 
these languages will have safety troubles. Because you need 
proper full-blown verification then...





Re: [OT] Algorithm question

2017-05-12 Thread Ola Fosheim Grostad via Digitalmars-d

On Friday, 12 May 2017 at 18:43:53 UTC, H. S. Teoh wrote:
I'm surprised there are no (known) incremental algorithms for 
generating

a random permutation of 0..n that requires less than O(n) space.


I've told you all you need to know...



Re: "Competitive Advantage with D" is one of the keynotes at C++Now 2017

2017-04-28 Thread Ola Fosheim Grostad via Digitalmars-d-announce

On Friday, 28 April 2017 at 22:11:30 UTC, H. S. Teoh wrote:
On Fri, Apr 28, 2017 at 05:11:29PM -0400, Nick Sabalausky 
(Abscissa) via Digitalmars-d-announce wrote:

On 04/28/2017 04:26 PM, Atila Neves wrote:
> The other day I was reminded that in C++ land one has to 
> manually write `operator<<` to print things out and 
> `operator==` to compare things.


Not to mention you have to overload operator<, operator!=, 
operator==, operator>, operator<=, *and* operator>= in order to 
get the right results in all cases.


In D, you have to overload opEquals and opCmp.  Hmm, I wonder 
why I enjoy programming in D more than C++...


Comparison is better in C++. This is a weak spot in D. You could 
do the same in C++ as D if you wanted to. You can detect the 
presence of operator< in overload templates, but being explicit 
is not much work and more flexible. Just cut'n'paste the set you 
want...
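The "cut'n'paste set" mentioned above can be sketched like this (the `Version` type is a hypothetical example): define `operator==` and `operator<` once, then forward the remaining four comparisons to them.

```cpp
// Hypothetical value type: write operator== and operator< once,
// then derive the other four comparisons from them.
struct Version {
    int major, minor;
};

bool operator==(const Version& a, const Version& b) {
    return a.major == b.major && a.minor == b.minor;
}
bool operator<(const Version& a, const Version& b) {
    return a.major != b.major ? a.major < b.major : a.minor < b.minor;
}
// The cut'n'paste set: each derived operator forwards to < or ==.
bool operator!=(const Version& a, const Version& b) { return !(a == b); }
bool operator> (const Version& a, const Version& b) { return b < a; }
bool operator<=(const Version& a, const Version& b) { return !(b < a); }
bool operator>=(const Version& a, const Version& b) { return !(a < b); }
```

This mirrors what D's `opEquals`/`opCmp` generate for you, at the cost of some boilerplate per type.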


Re: Garbage Collector?

2017-04-28 Thread Ola Fosheim Grostad via Digitalmars-d

On Friday, 28 April 2017 at 21:21:13 UTC, jmh530 wrote:
To be fair, C++ effectively has multiple pointer types too with 
raw pointers, unique_ptr, shared_ptr, and weak_ptr. However, 
each of the extra ones has a unique purpose and are opt-in. As 
a result, people happily use them when it makes their lives 
easier.


Yes, they are not language types though, so no special effect on 
the compiler or runtime. The language-level types are pointers (*) 
and references (&).


By contrast, C++/CLI (I'm more familiar with that than managed 
C++) has pointer to managed heap and pointer to unmanaged heap. 
The concepts overlap more.


Yes, and I assume those are language types so that the compiler 
and runtime can take advantage of it?
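To illustrate the distinction drawn above (a sketch with a hypothetical `deref_demo` function): C++'s smart pointers are ordinary library templates wrapping a raw pointer, with no special compiler or runtime support, unlike the built-in pointer and reference types.

```cpp
#include <memory>

// unique_ptr and shared_ptr are plain library templates over a raw
// pointer; the compiler gives them no special treatment, unlike the
// built-in pointer (*) and reference (&) types.
int deref_demo() {
    auto u = std::make_unique<int>(41);      // unique ownership, opt-in
    auto s = std::make_shared<int>(*u + 1);  // reference-counted, opt-in
    int* raw = s.get();                      // the underlying language-level pointer
    return *raw;
}
```

A managed-pointer type like C++/CLI's, by contrast, is a language type the compiler and GC can cooperate on.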




Re: Garbage Collector?

2017-04-27 Thread Ola Fosheim Grostad via Digitalmars-d

On Thursday, 27 April 2017 at 22:43:56 UTC, Moritz Maxeiner wrote:

Working on the memory chunk layer is memory management.
Working on the object layer is object lifetime management.
D offers you both automatic memory management and automatic 
lifetime management via its GC.


D offers sound automatic lifetime management? Since when?



Re: Duplicated functions not reported?

2017-04-16 Thread Ola Fosheim Grostad via Digitalmars-d-learn

On Sunday, 16 April 2017 at 15:54:16 UTC, Stefan Koch wrote:

sorting has O(n^2) worst case complexity.
Therefore totaling to O(n^2) worst case again.


Sorting with comparison is solved in O(n log n). If you have an 
upper limit on signature length then the problem is solvable for 
the whole program in O(n).


Not that this says an awful lot...
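The sort-and-scan approach referred to above can be sketched as follows (the `findDuplicates` helper is hypothetical): sorting the signatures costs O(n log n) comparisons, after which duplicates are adjacent and a single linear pass finds them.

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Sketch: find duplicate function signatures by sorting and scanning
// adjacent entries -- O(n log n) comparisons overall, not O(n^2).
std::vector<std::string> findDuplicates(std::vector<std::string> sigs) {
    std::sort(sigs.begin(), sigs.end());      // O(n log n)
    std::vector<std::string> dups;
    for (size_t i = 1; i < sigs.size(); ++i)  // one linear scan
        if (sigs[i] == sigs[i - 1] &&
            (dups.empty() || dups.back() != sigs[i]))
            dups.push_back(sigs[i]);
    return dups;
}
```

With bounded signature length a radix sort would bring this down to O(n) for the whole program, as noted above.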



Re: Rename 'D' to 'D++'

2017-03-10 Thread Ola Fosheim Grostad via Digitalmars-d

On Friday, 10 March 2017 at 23:00:16 UTC, XavierAP wrote:
IMHO... Only from a typical C++ centric perspective can it be 
claimed that C++11 and higher have not copied (not from D which 
was most of the time not first).


Neither C++ or D have any significant original features.

the first. And everything can be called "syntactic sugar" over 
assembly, nay machine code.


This isn't right, though. Modern C++ has made semantic additions 
and adjustments that enable new patterns (or stricter typing).


And yes often D has implemented them first, which can only be 
blamed on C++ itself. C++ was designed to be


Not sure what you mean. Features are proposed decades before they 
get standardized, and they get implemented as experimental 
features years before that. In general, a standardization process 
expects multiple independent implementations to exist before 
acceptance...


time it could be kicked only with the approval of an ISO 
committee.


Not really: there are multiple non-standard features in all the 
C++ compilers, and people use them. Each of those compilers is 
more widespread than D, so if you want a fair comparison you'd 
have to compare the dialects and not an ISO standard (which will 
always be a shared subset of the implementations)





Re: Clarification on D.

2017-03-09 Thread Ola Fosheim Grostad via Digitalmars-d

On Thursday, 9 March 2017 at 14:38:32 UTC, Guillaume Piolat wrote:
On Thursday, 9 March 2017 at 14:08:00 UTC, Ola Fosheim Grøstad 
wrote:


I don't really want to talk with you.


Whatever suits you, but don't pretend that people that express 
views about D online are the competition. They are overwhelmingly 
people that have used it or are using it.


So whatever they like or dislike about it is rooted in their 
experience. I.e. It is real.





Re: Google is apparently now better at searching programming-related questions

2017-03-03 Thread Ola Fosheim Grostad via Digitalmars-d
On Friday, 3 March 2017 at 18:28:50 UTC, Nick Sabalausky 
(Abscissa) wrote:
startpage.com is another way to get clean (or at least 
clean-ish) results. Although, it's conceivable (probable?) it's 
really giving out results based on a "user" that's really an 
aggregate of startpage.com's users.


I'm getting completely different results from startpage.com too... 
But I assume Google also has geographical bias... Too much AI...




Re: If you needed any more evidence that memory safety is the future...

2017-02-25 Thread Ola Fosheim Grostad via Digitalmars-d

On Saturday, 25 February 2017 at 22:37:15 UTC, Chris Wright wrote:
The undefined behavior is what happens after the would-be 
assertion failure occurs. The compiler is free to emit code as 
if the assertion passed, or if there is no way for the 
assertion to pass, it is free to do anything it wants.


No. That would be implementation-defined behaviour. Undefined 
behaviour means the whole program is illegal, i.e. not covered by 
the language at all.





Re: If you needed any more evidence that memory safety is the future...

2017-02-25 Thread Ola Fosheim Grostad via Digitalmars-d

On Saturday, 25 February 2017 at 21:49:43 UTC, Chris Wright wrote:

On Sat, 25 Feb 2017 22:12:13 +0100, Timon Gehr wrote:


On 25.02.2017 15:38, Chris Wright wrote:

On Sat, 25 Feb 2017 13:23:03 +0100, Timon Gehr wrote:
If 'disable' (as can be reasonably expected) means the 
compiler will behave as if they were never present, then it 
does not.


https://dlang.org/dmd-linux.html#switch-release


This literally says "[...] assertion failures are undefined 
behaviour".


...

It says it doesn't emit code for assertions.

Then it says assertion failures are undefined behavior.

How does that even work?


LLVM and other optimizers provide functionality for introducing 
axioms directly. D allows compilers to turn asserts into axioms 
without proof. If the axioms contradict each other, the whole 
program becomes potentially undefined (i.e. true and false become 
arbitrary).
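A sketch of how a disabled assertion becomes an axiom (the `ASSUME` macro and `div_by` function are illustrative; `__builtin_unreachable` is GCC/Clang-specific): with checks compiled out, the macro tells the optimizer "this condition holds" without testing it, so a false condition at runtime is undefined behaviour — which is what "assertion failures are undefined behaviour" means for `-release` builds.

```cpp
// With assertions disabled, ASSUME(cond) asserts "cond holds" to the
// backend without emitting a runtime check (GCC/Clang idiom).
#define ASSUME(cond) do { if (!(cond)) __builtin_unreachable(); } while (0)

int div_by(int a, int b) {
    ASSUME(b != 0);  // axiom: the optimizer may drop any b == 0 handling
    return a / b;    // if b == 0 actually occurs, behaviour is undefined
}
```

Two contradictory `ASSUME`s in the same path make every fact derivable, which is the "true and false become arbitrary" situation described above.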




