Re: Printing shortest decimal form of floating point number with Mir

2021-01-05 Thread welkam via Digitalmars-d-announce
On Tuesday, 5 January 2021 at 21:46:34 UTC, Ola Fosheim Grøstad 
wrote:

On Tuesday, 5 January 2021 at 21:43:09 UTC, welkam wrote:

Replace alias Bar(T) = Foo!T; with alias Bar = Foo;

struct Foo(T) {}

alias Bar = Foo;

void f(T)(Bar!T x) {}

void main() {
auto foo = Bar!int();
f(foo);
}


The example was a reduced case. One can trivially construct 
examples where that won't work.


It is very useful to create a simple alias from a complex type 
for export from a type library, then it breaks when people use 
that type library to write templated functions.


People do this all the time in C++.


I reread the whole thread. You want something like this, except without mixins and actually working.


struct Foo(T) {}

mixin template Bar(T) {
 Foo!T
}

void f(T)(Foo!T x) {}

void main() {
f(mixin Bar!(int)());
}

In languages you can pass things by value, by reference, or by name. Alias works by the pass-by-name mechanism. You cannot use alias or capture by name here, because you can't get a name for something that doesn't exist yet.


What you want is parametric generation of a definition and its insertion in place. Now that's something Walter can understand.
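
Roughly, in today's D that kind of "generate a definition and insert it in place" can only be approximated with string mixins. A minimal sketch of the idea (my own illustration; Foo, makeBar and Bar_int are made-up names, not anything from the thread):

struct Foo(T) {}

// Generate the text of a definition, parameterized by a type name.
enum makeBar(string T) = "struct Bar_" ~ T ~ " { Foo!" ~ T ~ " payload; }";

mixin(makeBar!"int");   // inserts: struct Bar_int { Foo!int payload; }

void f(T)(Foo!T x) {}

void main() {
    Bar_int b;
    f(b.payload);       // the generated definition is an ordinary struct
}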




Re: Printing shortest decimal form of floating point number with Mir

2021-01-05 Thread welkam via Digitalmars-d-announce
On Wednesday, 23 December 2020 at 22:13:09 UTC, Ola Fosheim 
Grøstad wrote:
The big picture that the DIP suggested was that when stuff like 
this fails to compile:


  struct Foo(T) {}

  alias Bar(T) = Foo!T;

  void f(T)(Bar!T x) {}

  void main() {
auto foo = Bar!int();
f(foo);
  }

Then most programmers would just conclude that the compiler is 
broken beyond repair and move on to another language.


Replace alias Bar(T) = Foo!T; with alias Bar = Foo;

struct Foo(T) {}

alias Bar = Foo;

void f(T)(Bar!T x) {}

void main() {
auto foo = Bar!int();
f(foo);
}


Re: Printing shortest decimal form of floating point number with Mir

2021-01-05 Thread welkam via Digitalmars-d-announce
On Tuesday, 5 January 2021 at 15:10:29 UTC, Ola Fosheim Grøstad 
wrote:

On Tuesday, 5 January 2021 at 15:04:34 UTC, welkam wrote:

Also, how is "I'm like you" an insult?


I don't think I should reply to this…


Then don't reply to that sentence. My post had more than one sentence.


Also, there is the part where I showed that what you had been calling impossible and a bug for days is actually possible if you follow D's syntax and semantics. I guess that had nothing to do with your loss of motivation to discuss this topic further.


This code compiles

struct bar(T) {}
void f(T)(bar!T x) {}

void main()
{
alias fooInt = bar!int;
alias foo = bar;

assert(is(fooInt  == bar!int));
assert(is(foo!int == bar!int));
assert(is(fooInt  == foo!int));
}


Re: Printing shortest decimal form of floating point number with Mir

2021-01-05 Thread welkam via Digitalmars-d-announce
On Monday, 4 January 2021 at 22:55:28 UTC, Ola Fosheim Grøstad 
wrote:

It is a name, e.g.:

alias BarInt = Bar!int;


"BarInt", "Bar!int" and "Foo!int" are all names, or labels, if 
you wish. And they all refer to the same object: the nominal 
type. Which you can test easily by using "is(BarInt==Foo!int)".


Bar!int is not a name. It's a declaration. Bar is the name.
https://github.com/dlang/dmd/blob/master/src/dmd/dtemplate.d#L5754

Labels look like myLabel: and are used with goto, break, and continue.

Objects are class instances.
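
To make the distinction concrete, here is a small illustration of my own (not code from the thread):

struct Foo(T) {}

alias Bar = Foo;          // "Bar" is a name: an alias for the template Foo
alias BarInt = Bar!int;   // "Bar!int" is a template instantiation;
                          // "BarInt" is a name given to that instantiation
static assert(is(BarInt == Foo!int));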

On Monday, 4 January 2021 at 23:08:31 UTC, Ola Fosheim Grøstad 
wrote:

If the terminology is difficult, <...>


The main problem here is that you use words interchangeably that refer to similar but different concepts, as if they were the same. They are not! I got the concept from the first post, and I believe most people here got it too. What we are having trouble with is getting it from the abstract realm into reality. And reality demands that it be specific.


 and type "struct _ {}" <...> should be 
interchangeable with no semantic impact.


What you want is to reference the template thingy by name and assign it to a thing, so that when you use the thing it is interchangeable with the template thingy and semantically equivalent. So what is a thing? From your posts it's either:

1. identifier
2. declaration.

And from your posts the template thingy is either:
1. template instantiation
2. template definition

Since you won't get specific, we will talk about all 4 possibilities. For a template declaration of myStruct, let's see how we can go about creating an alias to it.


struct myStruct(T) {}

Assigning a template instantiation to an identifier (1,1) is simple:

alias myS11 = myStruct!int;

For assigning a template definition to an identifier (1,2) you write it like this:

alias myS12 = myStruct;

As for assigning a template instantiation to a declaration (2,1): well, you can't. The language does not permit that.


As for assigning a template definition to a declaration (2,2): you can't do that either.


And in practice it looks like this:
[code]
struct myStruct(T) {}
void f(T)(myStruct!T x) {}

void main()
{
alias myS11 = myStruct!int;
alias myS12 = myStruct;

myStruct!int x;
f(x);

myS11 x11;
f(x11);

myS12!int x12;
f(x12); 
}
[/code]

Now, there are also function templates, but I will leave those as a homework assignment. So the million dollar question is: what do you want to do with alias assignment to a declaration that you can't do with assignment to an identifier?




Drop ad hominem. Argue the case.


Ad hominem is when you use an insult INSTEAD of an argument. A detail that most people miss. Also, how is "I'm like you" an insult?




Re: Printing shortest decimal form of floating point number with Mir

2021-01-04 Thread welkam via Digitalmars-d-announce
On Sunday, 3 January 2021 at 22:50:16 UTC, Ola Fosheim Grøstad 
wrote:

YOU DO HAVE TO ACKNOWLEDGE A TYPE SYSTEM BUG!

If an indirection through an alias causes type unification to 
fail then that is a serious type system failure. No excuses 
please...


Different people have different definitions of words. It's clear that your definition of a bug does not match other people's definition, so instead of forcing other people to conform to your definition, it would be beneficial if you could express your ideas using other words.


Secondly, let's talk about

alias Bar!int = Foo!int;

or is it

alias Bar(T) = Foo!T;

Whatever. You want to alias a template, give it a new name, and then use the new name. Because you were not able to do that, you say it's an obvious type system bug. I mean, "You should be able to use whatever names you fancy without that having semantic implications". I guess type checking occurs during semantic analysis, so it's connected.


Anyway, you want to assign the template a name. Spoiler alert: Bar!int is not a name. It's also not a type or even an object. You might have used another term for how alias should work, but I can't track them all. It's a template instantiation.


Instead of
alias Bar!int = Foo!int;

use

alias Bar = Foo;
//or
alias Bar = Foo!int;

For more, read [1].

What you tried before was an attempt to assign a template instantiation to another template instantiation or function call. If you want to assign a name to a name, then write it that way.


When I got into personality types and typed myself, I found out that my type doesn't respect the physical world and details. And it's true. I struggle with who, where, when. I sometimes leave home without clipping my nails because I forgot to clip them. And I forgot because I was lost in my thoughts. Analyzing patterns is more important to me than being physically in the world. But what you displayed here is criminal neglect of details. There is a difference between types, objects, names, and symbols. There is a difference between template declaration and instantiation. There are differences between the type system and language semantics. If you won't pay attention to these details, of course you will have problems communicating with people. And your failure to communicate effectively with others is not an indication that it's bad for D's future.


People say that you notice in others what you don't like in yourself.

1. https://dlang.org/spec/template.html#aliasparameters


Re: Printing shortest decimal form of floating point number with Mir

2021-01-04 Thread welkam via Digitalmars-d-announce

On Monday, 4 January 2021 at 01:19:12 UTC, jmh530 wrote:

it makes things overly complicated

Just because a feature makes something simpler is not enough of an argument for why it should be added. Case in point: C, Lua, and Go. Those languages are popular in part because they are simple. That's why I said that no good arguments came after. This is not a good argument. A good argument would be one that shows that the benefit is worth the increased complexity.



If you aren't writing templates, then it wouldn't affect you.

I know how to write simple templates. I just don't use them recursively; I find that hard to reason about. Also, I'm concerned about compilation speed and the resulting code bloat. Maybe I'm just not experienced enough.


However, it was deemed beneficial enough that a form of it was 
added to C++ as part of C++ 11

Well, D already supports that, just with different syntax:

struct too_long_name(T) {}
alias bar = too_long_name;
bar!(int) x;

I wonder if the inability to do this would inhibit the ability 
of D code to interact with C++ code bases.


The way interop between D and C++ works is that you need to match function signatures and struct/class data layout at the binary level. Then you just link. There is no cross-language template instantiation and there never will be. So it will not affect interop.
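
A minimal sketch of what that matching looks like in practice (my own illustration; the Point struct and lengthSquared function are made-up names, not from any real code base):

// C++ side, compiled separately with a C++ compiler:
//
//     struct Point { int x, y; };
//     int lengthSquared(Point p) { return p.x * p.x + p.y * p.y; }
//
// The D side only mirrors the layout and the signature, then the two are linked.
extern (C++) struct Point { int x, y; }
extern (C++) int lengthSquared(Point p);

void main() {
    auto p = Point(3, 4);
    // lengthSquared(p) would be resolved at link time against the C++ object file.
}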


P.S. Thank you for a well-written post with a link to a useful read.




Re: Printing shortest decimal form of floating point number with Mir

2021-01-03 Thread welkam via Digitalmars-d-announce

On Sunday, 3 January 2021 at 06:35:23 UTC, 9il wrote:
On Tuesday, 29 December 2020 at 19:59:56 UTC, Ola Fosheim 
Grøstad wrote:
On Tuesday, 29 December 2020 at 16:14:59 UTC, Atila Neves 
wrote:

On Thursday, 24 December 2020 at 14:14:33 UTC, 9il wrote:

On Thursday, 24 December 2020 at 14:08:32 UTC, welkam wrote:

On Wednesday, 23 December 2020 at 18:05:40 UTC, 9il wrote:

It was a mockery executed by Atila

Read all the comments and didn't see any mockery


Yes, it wasn't explicit. He didn't write bad words, he made a bad decision. Bad for D.


I apologise if what I wrote came across as mockery; it 
certainly wasn't intended that way.


How would you have liked for me to have handled it better?


I am not speaking for Ilya, but from skimming through the 
dialogue it struck me that you didn't respond from the 
perspective of managing the process, but from a pure engineer 
mindset of providing alternatives.


It would've been better if you started by 1. understanding the 
issue 2. acknowledging that the type system has an obvious bug 
3. looking at the issue from the perspective of the person 
bringing attention to the issue. I don't think anyone was 
looking for workarounds, but looking for


1. acknowledgment of the issue
2. acknowledgment of what the issue leads to in terms of 
inconvenience

3. a forward looking vision for future improvements


+1


I don't want to be the guy that comes and just takes a massive dump in the middle of the room, but I feel like I have to.


The whole language change process is not a place to get the tribe's validation or emotional support for your boo-boos. It's like looking for a virgin partner in a brothel. Makes no sense. You should view it more like this:

https://images.theconversation.com/files/31778/original/zhrxbdsm-1379916057.jpg?ixlib=rb-1.1.0=45=format=926=clip

The way I saw it, the whole argumentation for a language change went like this:

9il: This would be helpful for my lib
Atila: I'm not convinced this is a good addition to the language

That's it. No more good arguments came later. If a proposal has only this kind of argument, then of course it will be rejected, even if the idea is good.


You should put yourself in Atila's boots. If you accept a change that later turns out to be a bad idea, you can't just take it out. We would all have to live with it for 10 or more years. So, to avoid the situation I described, a proposal needs solid argumentation, and the cost/benefit ratio needs to be clear, to make a good decision.


And to finish, I want to point out that your proposal is not in the same category as AST macros. If you or someone else comes up with solid arguments, then the outcome might be different. As for me, I do not know the ins and outs of templates well enough to judge whether your proposal is good or not. No one has shown how it would benefit the code I write.


Re: Printing shortest decimal form of floating point number with Mir

2020-12-24 Thread welkam via Digitalmars-d-announce

On Wednesday, 23 December 2020 at 18:05:40 UTC, 9il wrote:

It was a mockery executed by Atila

Read all the comments and didn't see any mockery


Re: Chimpfella - new library to do benchmarking with ranges (even with templates!)

2020-12-20 Thread welkam via Digitalmars-d-announce

On Saturday, 19 December 2020 at 05:08:56 UTC, Max Haughton wrote:

This will soon support Linux's perf_event so you will be able 
to measure cache misses (and all the other thousands of pmc's 
intel expose), use LBR msrs etc.

Are you going to read the stdout from calling perf, or are you going to read the perf.data file?


Re: Talk by Herb Sutter: Bridge to NewThingia

2020-06-29 Thread welkam via Digitalmars-d-announce

On Sunday, 28 June 2020 at 21:00:09 UTC, Dibyendu Majumdar wrote:

To be honest the analysis doesn't quite stack up. Because 
compatibility is not the reason for the success of Go, or Rust.


I would say the success of a language depends on many factors:


Think of the reasons why people are popular. Make a mental list of them. Now try to explain why the Kardashians are still popular.

Not everything in this life happens because of good properties. At this moment the Kardashians are popular because they are popular.


Re: Codefence, an embeddable interactive code editor, has added D support.

2020-05-23 Thread welkam via Digitalmars-d-announce

On Saturday, 23 May 2020 at 15:04:35 UTC, Paulo Pinto wrote:

Hi everyone,

as the subject states, you can find it here, 
https://codefence.io/


The current version is 2.092.0 with dmd.

Regards,


Why is such a thing free? Who pays for the servers?


Re: [OT] What do you guys think of dark comedy channel on IT sh.. stuff?

2020-05-19 Thread welkam via Digitalmars-d-announce

On Tuesday, 19 May 2020 at 09:54:38 UTC, Iain Buclaw wrote:
Walter raised a pull request to merge the and and and or or AST 
nodes into a logical logical operator, and the initial rebuttal 
was that he's used log log for and and or or or operators for a 
very (very) long time.


Now this is gold. For some reason it reminds me of this
The Missile Knows Where It Is...
https://www.youtube.com/watch?v=bZe5J8SVCYQ


Re: Pijamas, a simple fluent assertation library (forked from Pyjamas)

2020-05-15 Thread welkam via Digitalmars-d-announce

On Friday, 15 May 2020 at 14:42:47 UTC, Dmitry Olshansky wrote:

compiler usually explodes trying to swallow it


https://media.giphy.com/media/tfxgAK370HzEY/giphy.gif


Re: "Programming in D" on Educative.io

2020-05-15 Thread welkam via Digitalmars-d-announce

On Thursday, 14 May 2020 at 08:42:43 UTC, ShadoLight wrote:

On Wednesday, 13 May 2020 at 19:25:43 UTC, welkam wrote:

On Thursday, 7 May 2020 at 09:18:04 UTC, Ali Çehreli wrote:

Because D is a re-engineering of C++


I thought it was a re-engineering of C


This opinion seems quite common in the D community, but I 
frankly don't see it. If you are referring to the D subset 
defined by the BetterC switch, well, maybe then I would agree. 
But not for D in general.


At first this language was called Mars, and it was simple. It was one man's project. Walter fixed the flaws he saw in C but made sure that porting C to Mars was easy: copied code either compiled or threw an error.


Then Andrei came and he put all that metaprogramming, generics, 
introspection and more on top of the base that Walter built.


I don't think you can call D a re-engineering of C++ when it was a one-person project. But historical accuracy is not why I raised that question. I remember there was a post by a C++ programmer who came to this mailing list saying that a year earlier he had tried D because he was told it's similar to C++ but without all the cruft, or something like that; I don't remember exactly. Because D does not behave like C++, that programmer didn't like the language. One year later he tried D again, but this time he came to D from the point of view of it being like C but with its flaws fixed and stuff added on top of that core. Then he liked the language. The language didn't change, but his enjoyment changed when he changed his expectations.


Re: "Programming in D" on Educative.io

2020-05-13 Thread welkam via Digitalmars-d-announce

On Thursday, 7 May 2020 at 09:18:04 UTC, Ali Çehreli wrote:

Because D is a re-engineering of C++


I thought it was a re-engineering of C


Re: $750 Bounty: Issue 16416 - Phobos std.uni out of date (should be updated to latest Unicode standard)

2020-05-04 Thread welkam via Digitalmars-d-announce

On Monday, 4 May 2020 at 17:30:41 UTC, Arine wrote:

On Monday, 4 May 2020 at 17:01:01 UTC, Robert M. Münch wrote:
Besides getting the work done, there is one constraint: The 
work needs to get into Phobos. It doesn't make sense to have 
it sit around, because it's not being merged. I don't have any 
clue who is in charge, who decides this. Or if there need to 
be some conditions fulfilled so that the result gets merged.


I feel like this is going to be the biggest obstacle.


If the changes to Phobos don't bring breaking changes, then I don't see how an update to std.uni might not get merged.


Re: Hunt Framework 3.0.0 Released, Web Framework for DLang!

2020-05-01 Thread welkam via Digitalmars-d-announce

On Friday, 1 May 2020 at 16:32:27 UTC, Heromyth wrote:

On Friday, 1 May 2020 at 13:11:23 UTC, welkam wrote:

On Friday, 1 May 2020 at 10:54:55 UTC, zoujiaqing wrote:

<...>
I did a quick look and it looks like an HTTP server + some goodies. Is this a correct assessment? If yes, what is the status of HTTP 2.0, SSL, and bzip support?


Not exactly. The whole Hunt Framework includes many other things like database, redis, amqp, etc., in addition to HttpServer and HttpClient.


Of course, HTTP 2.0 and TLS are supported in Hunt
Framework. Here are some simple demos:

https://github.com/huntlabs/hunt-http/tree/master/examples/H2C-Demo
https://github.com/huntlabs/hunt-http/tree/master/examples/HttpDemo


Took a look at https://github.com/huntlabs and it seems that these guys have everything in place to make web-facing applications. That's a lot of work. I should take it out for a spin to see how it "handles".


Re: Hunt Framework 3.0.0 Released, Web Framework for DLang!

2020-05-01 Thread welkam via Digitalmars-d-announce

On Friday, 1 May 2020 at 10:54:55 UTC, zoujiaqing wrote:

<...>
I did a quick look and it looks like an HTTP server + some goodies. Is this a correct assessment? If yes, what is the status of HTTP 2.0, SSL, and bzip support?


Re: DConf 2020 Canceled

2020-03-15 Thread welkam via Digitalmars-d-announce

On Sunday, 8 March 2020 at 03:56:35 UTC, Era Scarecrow wrote:
 From what i've researched, it's more or less the flu... a 
somewhat more contagious, over-hyped, genetically modified, 
potentially respiratory infection cold/flu; And likely a tool 
by government(s) to force unwanted policies down our throats 
like Martial Law, restriction of travel, Mandatory Vaccines 
and/or micro-chipping. As well as the government had it since 
2015 in certain labs thus more than likely there's already a 
vaccine.


 Lots of details on the matter. Unfortunate for DConf to be 
cancelled. But whatever is considered safest and best for 
everyone involved.


Fcuking ExxPs man...

https://www.youtube.com/watch?v=ATgZ_0U0RAk
(ExxP: Don't control me bro!)
https://www.youtube.com/watch?v=Ha5gYfGKUZI


Re: utiliD: A library with absolutely no dependencies for bare-metal programming and bootstrapping other D libraries

2019-05-20 Thread welkam via Digitalmars-d-announce

On Friday, 10 May 2019 at 23:51:56 UTC, H. S. Teoh wrote:
Libc implementations of fundamental operations, esp. memcpy, are usually optimized to next week and back for the target architecture, taking advantage of the target arch's quirks to maximize performance

Yeah about that...
Level1 Diagnostic: Fixing our Memcpy Troubles (for Looking Glass)
https://www.youtube.com/watch?v=idauoNVwWYE


Re: My Meeting C++ Keynote video is now available

2019-01-15 Thread welkam via Digitalmars-d-announce

On Tuesday, 15 January 2019 at 11:59:58 UTC, Atila Neves wrote:
He's not saying "kill classes in D", he's saying an OOP system 
in D could be implemented from primitives and classes don't 
need to be a language feature, similar to CLOS in Common Lisp.


For some people, writing OOP means writing the class keyword.


Re: DLP identify leaf functions

2018-12-11 Thread welkam via Digitalmars-d-announce

On Tuesday, 4 December 2018 at 11:02:11 UTC, Jacob Carlborg wrote:

On 2018-12-02 17:57, welkam wrote:

What timing. I am working (slowly) on a tool that would get all struct and class declarations and, for each of them, the functions in which they are used, then combine that with profiling data to find data structures that are hot and see how changing them affects performance. The only code that is written partially parses the perf.data file; the rest is missing. It would be wonderful if your tool could emit such functions; then my job would be easy. The parsing would already be done if the perf.data format were fully documented.


Something like "find all references"?


A late response is better than none, I guess. "References" is a bad word, because it could mean references as passed to functions, or pointers. What I am looking for is occurrences in functions. Whether a class/struct is created or passed through a pointer, I want to know about it. If a class/struct is put into a container, I want to track that as well. I want to know all uses of that data type.


One frequent thing people say on Reddit is that Phobos is based on the GC. It would be nice if there were a tool that could report what percentage of functions actually use the GC and which are marked @nogc.


Re: D compilation is too slow and I am forking the compiler

2018-12-11 Thread welkam via Digitalmars-d-announce

On Friday, 23 November 2018 at 16:21:53 UTC, welkam wrote:
If you want to read data from that bool CPU needs to fetch 8 
bytes of data(cache line of 64 bits). What this means is that 
for one bit of information CPU fetches 64 bits of data 
resulting in 1/64 = 0.015625 or ~1.6 % signal to noise ratio. 
This is terrible!


Cache line of 64 bits. 64 BITS. This forum is full of knowledgeable people, and they should have spotted this mistake, or they didn't read it. Cache lines on most processors are 64 bytes. Now I know why it felt weird when I wrote that post. So the real math, when you read one bit from a cache line, is 1/(64*8) = 0.001953125, or ~0.2% signal-to-noise ratio.
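
To put the corrected math into code, here is a small illustration of my own (not DMD code): 64 separate bools occupy a whole 64-byte cache line, while the same 64 flags packed into one ulong fit in 8 bytes.

struct PackedFlags {
    ulong bits;

    bool get(size_t i) const { return ((bits >> i) & 1) != 0; }

    void set(size_t i, bool v) {
        if (v) bits |= 1UL << i;
        else   bits &= ~(1UL << i);
    }
}

void main() {
    bool[64] unpacked;      // 64 bytes: one whole cache line for 64 flags
    PackedFlags packed;     // 8 bytes: one cache line could hold 512 flags

    packed.set(5, true);
    assert(packed.get(5) && !packed.get(6));
    assert(unpacked.sizeof == 64);
    static assert(PackedFlags.sizeof == 8);
}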


Re: DLP identify leaf functions

2018-12-02 Thread welkam via Digitalmars-d-announce

On Friday, 30 November 2018 at 20:10:05 UTC, Jacob Carlborg wrote:
I would like to announce a new project I've started, called DLP 
(D Language Processing). Currently it's quite experimental but 
the idea is that it would contain a collection of commands for 
inspecting D code in various ways. It uses the DMD frontend as 
a library (the Dub package) to process D code.


What timing. I am working (slowly) on a tool that would get all struct and class declarations and, for each of them, the functions in which they are used, then combine that with profiling data to find data structures that are hot and see how changing them affects performance. The only code that is written partially parses the perf.data file; the rest is missing. It would be wonderful if your tool could emit such functions; then my job would be easy. The parsing would already be done if the perf.data format were fully documented.





Re: D compilation is too slow and I am forking the compiler

2018-11-23 Thread welkam via Digitalmars-d-announce

On Friday, 23 November 2018 at 19:21:03 UTC, Walter Bright wrote:

On 11/23/2018 5:23 AM, welkam wrote:
Currently D reads all the files that are passed on the command line before starting lexing/parsing, but in principle we could start lexing/parsing after the first file is read. In fact we could start after the first file's first line is read.


DMD used to do that. But it was removed because:

1. nobody understood the logic

2. it didn't seem to make a difference

You can still see the vestiges by the:

static if (ASYNCREAD)

blocks in the code.


I didn't expect huge wins. This would be useful when you start your computer, the files have to be read from old spinning rust, and the project has many files. Otherwise the files will be cached, and memcpy is fast. I was surprised at how fast modern computers copy data from one place to another.


Speaking of memcpy, here is a video you might like. It has memcpy, assembler, and a bit of compiler talk. It's a very easy watch for when you want to relax.

Level1 Diagnostic: Fixing our Memcpy Troubles (for Looking Glass)
https://www.youtube.com/watch?v=idauoNVwWYE


Re: D compilation is too slow and I am forking the compiler

2018-11-23 Thread welkam via Digitalmars-d-announce
On Friday, 23 November 2018 at 14:32:39 UTC, Vladimir Panteleev 
wrote:

On Friday, 23 November 2018 at 13:23:22 UTC, welkam wrote:
If we ran these steps in different threads on the same core with SMT, we could make better use of the core's resources: reading the file with the kernel, decoding UTF-8 with vector instructions, and lexing/parsing with scalar operations, while all communication is done through the L1 and L2 caches.


You might save some pages from the data cache, but by doing 
more work at once, the code might stop fitting in the 
execution-related caches (code pages, microcode, branch 
prediction) instead.


It's not about saving TLB pages or fitting better in cache. Compilers are considered streaming applications: they don't utilize CPU caches effectively. You can't read one character, emit machine code, and then read the next character; you have to go over all the data multiple times while you modify it. I can find white papers, if you are interested, where people test GCC with different cache architectures and it doesn't make much of a difference. GCC is a popular application for testing caches.


Here are profiling data from DMD
 Performance counter stats for 'dmd -c main.d':

            600.77 msec task-clock:u              #    0.803 CPUs utilized
                 0      context-switches:u        #    0.000 K/sec
                 0      cpu-migrations:u          #    0.000 K/sec
            33,209      page-faults:u             # 55348.333 M/sec
     1,072,289,307      cycles:u                  # 1787148.845 GHz
       870,175,210      stalled-cycles-frontend:u #   81.15% frontend cycles idle
       721,897,927      stalled-cycles-backend:u  #   67.32% backend cycles idle
       881,895,208      instructions:u            #    0.82  insn per cycle
                                                  #    0.99  stalled cycles per insn
       171,211,752      branches:u                # 285352920.000 M/sec
        11,287,327      branch-misses:u           #    6.59% of all branches

       0.747720395 seconds time elapsed

       0.497698000 seconds user
       0.104165000 seconds sys

The most important figure in this conversation is 0.82 insn per cycle. My CPU can do ~2 IPC, so there are plenty of CPU resources available. New Intel desktop processors are designed to do 4 insn/cycle. What is limiting DMD's performance is slow RAM and data fetching, not what you listed.

Code pages - do you mean the TLB here?

Microcode cache: not all processors have one, and those that do only benefit trivial loops. DMD has complex loops.

Branch prediction: more entries in the branch predictor won't help here, because branches are missed because the data is unpredictable, not because there are too many branches. Also, the branch misprediction penalty is around 30 cycles, while reading from RAM can be over 200 cycles.

L1 code cache: you didn't mention this, but running those tasks in SMT mode might thrash the L1 instruction cache, so execution might not be optimal.


Instead of parallel reading of imports, DMD needs more data-oriented data structures instead of old OOP-inspired data structures. I'll give you an example of why that is the case.


Consider
struct S {
    bool isAlive;
}

If you want to read data from that bool CPU needs to fetch 8 
bytes of data(cache line of 64 bits). What this means is that for 
one bit of information CPU fetches 64 bits of data resulting in 
1/64 = 0.015625 or ~1.6 % signal to noise ratio. This is terrible!


AFAIK DMD doesn't make this kind of mistake, but it's full of large structs and classes that are not efficient to read. To fix this, we need to split those large data structures into smaller ones that contain only what is needed for a particular algorithm. I predict a 2x speed improvement if we transformed all the data structures in DMD. That's an improvement without improving algorithms, only changing data structures. This is getting too long, so I will stop right now.
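
A rough sketch of the kind of split I mean (the types here are made up for illustration, not actual DMD data structures):

// Wide "OOP-style" layout: a pass that only checks liveness still drags
// every field of every symbol through the cache.
struct SymbolWide {
    bool isAlive;
    string name;
    string mangledName;
    int[16] misc;
}

// Data-oriented layout: each field lives in its own array, so a pass
// touches only the array it actually needs.
struct Symbols {
    bool[] isAlive;        // hot: scanned by every pass
    string[] name;         // cold: only needed for diagnostics
    string[] mangledName;
    int[16][] misc;
}

size_t countAlive(const ref Symbols s) {
    size_t n;
    foreach (alive; s.isAlive)   // streams 1 byte per symbol,
        if (alive) ++n;          // not the whole wide struct
    return n;
}

void main() {
    Symbols s;
    s.isAlive = [true, false, true];
    assert(countAlive(s) == 2);
}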


Re: D compilation is too slow and I am forking the compiler

2018-11-23 Thread welkam via Digitalmars-d-announce
On Thursday, 22 November 2018 at 04:48:09 UTC, Vladimir Panteleev 
wrote:


Sorry about that. I'll have to think of two titles next time, 
one for the D community and one for everyone else.


If it's of any consolation, the top comments in both discussion 
threads point out that the title is inaccurate on purpose.


Your post on Reddit received more comments than the D front end's inclusion into GCC. If you had titled your post differently, you probably wouldn't have had such success, so from my perspective it's a net positive. Sure, a few people took away the wrong message, but more people saw your post.


Re: D compilation is too slow and I am forking the compiler

2018-11-23 Thread welkam via Digitalmars-d-announce
On Wednesday, 21 November 2018 at 10:56:02 UTC, Walter Bright 
wrote:
Wouldn't it be awesome to have the lexing/parsing of the 
imports all done in parallel?


From my testing, lexing/parsing takes a small amount of the build time, so running it in parallel might be a small gain. We should consider running heavier-hitting features like CTFE and templates in parallel.


Since we are in wish land, here are my wishes. Currently D reads all the files that are passed on the command line before starting lexing/parsing, but in principle we could start lexing/parsing after the first file is read. In fact, we could start after the first file's first line is read. Of all the operations before the semantic pass, reading from the hard disk should be the slowest, so it might be possible to decode UTF-8, lex, and parse at the speed of reading from the hard disk. If we ran these steps in different threads on the same core with SMT, we could make better use of the core's resources: reading the file with the kernel, decoding UTF-8 with vector instructions, and lexing/parsing with scalar operations, while all communication is done through the L1 and L2 caches.
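
A toy sketch of that overlap, using std.concurrency (my own illustration, nothing like the real front end): one thread reads files from disk while the main thread "lexes" whatever has already arrived. The line counting stands in for decoding/lexing/parsing.

import std.algorithm : count;
import std.concurrency : receive, send, spawn, thisTid, Tid;
import std.file : readText;

// Reader thread: streams each file's contents to the parent as soon as it is read.
void reader(Tid parent, immutable(string)[] files) {
    foreach (f; files) {
        string name = f;
        send(parent, name, readText(f));
    }
    send(parent, true);                         // done marker
}

void main(string[] args) {
    immutable files = args[1 .. $].idup;
    spawn(&reader, thisTid, files);

    bool done = false;
    while (!done)
        receive(
            (string name, string src) {
                auto lines = src.count('\n');   // stand-in for lex/parse work
            },
            (bool _) { done = true; }
        );
}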


I thought about using memory-mapped files to unblock file reading as a first step, but the lack of good documentation about memory-mapped files and my lack of a thorough understanding of the front end made me postpone this modification. It's a change with little benefit.


The main difficulty in getting that to work is dealing with the 
shared string table.


At the beginning of parsing, a thread could get a read-only shared slice of the string table. All strings not in that table are put into a local string table. After parsing, the tables are merged and the shared slice is updated, so a new thread can start with a bigger table. This assumes that the table is not sorted.
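
A toy model of that scheme (my own sketch; parseOneFile and the tables are made up and have nothing to do with DMD's actual string table):

// Each "worker" only records identifiers missing from the read-only snapshot;
// a single-threaded step merges the local tables back in afterwards.
string[] parseOneFile(immutable(bool[string]) snapshot, string[] idents) {
    string[] localNew;
    foreach (id; idents)
        if (id !in snapshot)
            localNew ~= id;
    return localNew;
}

void main() {
    bool[string] sharedTable = ["int": true, "void": true];
    immutable snapshot = cast(immutable) sharedTable.dup;

    // In the real scheme each of these calls would run in its own thread.
    auto newA = parseOneFile(snapshot, ["int", "foo"]);
    auto newB = parseOneFile(snapshot, ["void", "bar"]);

    foreach (id; newA ~ newB)   // merge, then a bigger snapshot could be published
        sharedTable[id] = true;

    assert("foo" in sharedTable && "bar" in sharedTable);
}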


Re: Profiling DMD's Compilation Time with dmdprof

2018-11-17 Thread welkam via Digitalmars-d-announce
On Saturday, 17 November 2018 at 12:36:30 UTC, rikki cattermole 
wrote:

Just in case, did you disable your anti-virus first?

I had to edit the registry to turn that thing off, and it still sometimes pops up. Microsoft is so annoying.

https://imgur.com/a/x3cKjZm



Re: Profiling DMD's Compilation Time with dmdprof

2018-11-17 Thread welkam via Digitalmars-d-announce
I just updated DMD on my Windows machine and tried to compile hello world. It took 7.4 sec. Something is not right. Can anyone reproduce this? On the same machine running Linux it compiles and runs in 0.1 sec.


PS C:\Users\Welkam\Desktop\Projects> Measure-Command {dmd -run 
main.d}



Days  : 0
Hours : 0
Minutes   : 0
Seconds   : 7
Milliseconds  : 418
Ticks : 74184371
TotalDays : 8.58615405092593E-05
TotalHours: 0.0020606769722
TotalMinutes  : 0.12364061833
TotalSeconds  : 7.4184371
TotalMilliseconds : 7418.4371







Re: Backend nearly entirely converted to D

2018-11-08 Thread welkam via Digitalmars-d-announce

On Thursday, 8 November 2018 at 18:52:02 UTC, H. S. Teoh wrote:

length is getting ridiculous


Having better editor support is nice, but by "use a better editor" you meant use vim, didn't you? And even if I switched to vim, it wouldn't solve my initial objection to one-letter variable names. It's a needless hurdle. Not to mention that the next person new to this will likely have the same problems as me. And the person after that, etc. Which brings me to my recurring thought when dealing with DMD: how the f@#k should I have known that? Documentation and instructions around the D project are almost non-existent. Does the idea of the pit of success not apply to a compiler?



The human brain is good at finding patterns. It's also good at finding patterns it wants to find where they don't exist. Your statement that humans have no problems disambiguating language is completely false. Most people just ignore logical conflicts or the lack of information needed to correctly understand what is being said. Most people don't put much effort into understanding what exactly is being said, and humans are bad at conveying their thoughts through words to begin with. An extreme case of this is all forms of religious belief. Here is a clip where people give definitions of God and none of them are the same:

https://youtu.be/HhRo9ABvef4
and here is one where J.P. at least tries to disambiguate words:
https://youtu.be/q0O8Jw6grro

When people talk about God, first, they can't tell you precisely what they believe. Second, they don't know precisely what others believe. And third, it doesn't even matter, as long as you produce vague-sounding sentences. The same extends to the rest of human interaction, and humans would happily go on without noticing it until they have to interact with computers, where you have to define everything precisely.


Your second idea, that shorter words carry less information, is... just... what? English is not floating point, where length dictates precision. In German, maybe, with one word created by combining multiple, but not in English.


Then you combined both of your flawed ideas to produce a paragraph where good and bad ideas are mixed together.


No, I did not strawman. I took Walter's advice from the NWCPP talk precisely to show a flaw in it. If variable name length should be related to scope, then changing the scope should change the variable name length. You, on the other hand, turned the advice into binary advice: either local and short, or global and verbose.

NWCPP talk
https://youtu.be/lbp6vwdnE0k?t=444

Code is read more often than it is written and should be optimized for that. One-letter variable names are not descriptive enough. In short functions you can get away without paying the mental price, but in long ones you do not.


Re: Backend nearly entirely converted to D

2018-11-08 Thread welkam via Digitalmars-d-announce
On Thursday, 8 November 2018 at 18:15:55 UTC, Stanislav Blinov 
wrote:


One keystroke (well ok, two keys because it's *) ;)
https://dl.dropbox.com/s/mifou0ervwspx5i/vimhl.png



What sorcery is this? I need to know. I guess it's vim, but how does it highlight symbols?


Re: Backend nearly entirely converted to D

2018-11-08 Thread welkam via Digitalmars-d-announce

On Wednesday, 7 November 2018 at 22:08:36 UTC, H. S. Teoh wrote:


I don't speak for the compiler devs, but IMO, one-letter 
variables are OK if they are local, and cover a relatively 
small scope.



By saying "more descriptive" I should have clarified that I meant changing them to 3-7 letter names. Short variable names are OK for small functions like the one in attrib.d called void importAll(Scope* sc). It has a variable named sc, and it's clear where it is used.


Now, for all of you who think that one-letter variables are OK, here is an exercise. Go and open src/dmd/func.d with your favorite code editor. Find the function FuncDeclaration resolveFuncCall(). It's around 170 LOC long. Now find all uses of the variable Dsymbol s. Did you find them all? Are you sure? OK, now do the same for the variable loc. See the difference?


Java-style verbosity IMO makes code *harder* to read because the verbosity gets in your face, crowding out the more interesting (and important) larger picture of code structure.


What editor do you use? Here is the worst example to prove my point, but it's still sufficient. All editors worth your time highlight the same text when it is selected, and here is an example with a one-letter variable:

https://imgur.com/a/jjxCdmh
and a three-letter variable:
https://imgur.com/a/xOqbkmn

Where is all that crowding and loss of the larger picture you speak of? It's the opposite. The code structure is clearer with longer variable names than with one-letter ones.


As Walter said in his recent talk, the length of variable names 
(or identifiers in general, really) should roughly correspond 
to their scope


At best this is an argument from authority. You said how things should be, not why. For argument's sake, imagine a situation where you need to expand a function. By your proposed rules you should then rename the local variables to longer names. That's ridiculous. Yes, I watched that presentation, and I fully disagree with Walter and know for sure he doesn't have a sound argument to support his position.





Re: Backend nearly entirely converted to D

2018-11-08 Thread welkam via Digitalmars-d-announce
On Wednesday, 7 November 2018 at 22:03:20 UTC, Walter Bright 
wrote:
Single letter names are appropriate for locally defined 
symbols. There's also an informal naming convention for them, 
changing the names would disrupt that.


And where can I read about that naming convention? My guess is it's not documented anywhere and won't be in the foreseeable future, or ever. Also, are you sure you are not talking about two-letter variables like

sc for scope
fd for function declaration
td for template declaration

because I am not proposing to change those. They are 26 times better than one-letter names, and changing them would not bring a significant benefit. What I want to change is variables like m. Try guessing what it is used for. Hint: it is used for different things.


What I don't understand is that you are against changing variable names, which would improve code understandability, but you are not against changing for loops to foreach, which adds almost nothing to code readability and only looks better.


What you don't know about me is that I worked as a code reviewer/tester/merger at a PHP shop and know full well why you want pull requests the way you want them. I also know how much easier it is to review simple changes like foreach loop conversions and simple variable renamings.


Re: Profiling DMD's Compilation Time with dmdprof

2018-11-08 Thread welkam via Digitalmars-d-announce

On Tuesday, 6 November 2018 at 19:01:58 UTC, H. S. Teoh wrote:
It looks like it would be really useful one day when I try to 
tackle the dmd-on-lowmem-system problem again.


Based on my profiling, it seems that most memory is allocated in void importAll(Scope* sc), found in attrib.d. A person with more knowledge of the DMD source could create a new allocator for the scope data and, when it's no longer needed, just deallocate it all at once. My intuition says that after the IR is generated we no longer need the scope information.
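
Something along these lines, as a bare-bones illustration of the idea (my own sketch, not a proposal for DMD's actual allocator interface):

// Bump-allocate everything tied to one phase from a single region,
// then release the whole region at once when that data is dead.
struct Region {
    ubyte[] buffer;
    size_t used;

    void* alloc(size_t size) {
        size = (size + 15) & ~cast(size_t) 15;   // keep 16-byte alignment
        assert(used + size <= buffer.length, "region exhausted");
        auto p = buffer.ptr + used;
        used += size;
        return p;
    }

    void releaseAll() { used = 0; }              // drop every allocation at once
}

void main() {
    auto region = Region(new ubyte[1024 * 1024]);

    auto scopes = cast(int*) region.alloc(100 * int.sizeof);
    scopes[0] = 42;                              // ... front-end work ...

    region.releaseAll();                         // "IR generated, scope data gone"
}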


Here is the profiling data for a simple file:
https://imgur.com/a/ROa6JNd




Re: Backend nearly entirely converted to D

2018-11-07 Thread welkam via Digitalmars-d-announce
On Wednesday, 7 November 2018 at 00:01:13 UTC, Walter Bright 
wrote:

On 11/6/2018 3:00 PM, H. S. Teoh wrote:
What sort of refactoring are we looking at?  Any low-hanging 
fruit here

that we non-compiler-experts can chip away at?


Simply going with foreach loops is a nice improvement.


That sounds like a job that I can do. At this moment I am reading the DMD source code starting from main(), because when I tried to understand part of something I wanted to change, it felt like to understand anything you basically need to understand a third of the compiler.


One of the biggest and most needless hurdles I face in reading DMD code is single-letter variable names. If I changed one-letter variable names to more descriptive ones, would that patch be welcomed or considered a needless change?


Re: Backend nearly entirely converted to D

2018-11-07 Thread welkam via Digitalmars-d-announce

On Wednesday, 7 November 2018 at 14:39:55 UTC, Joakim wrote:


I don't know why you think that would matter: I'm using the 
same compilers to build each DMD version and comparing the 
build times as the backend was translated to D


What you compared is whether clang or DMD compiles code faster, not whether D code compiles faster than C++. To check that, you should compile both the C++ and the D with the same backend.


Re: NES emulator written in D

2018-02-04 Thread welkam via Digitalmars-d-announce

On Sunday, 4 February 2018 at 20:56:32 UTC, blahness wrote:
2. DMD just doesn't produce fast code compared to other modern 
compilers. It's a shame LDC or GDC isn't the default D compiler.


For the core team, improving DMD's codegen is not a priority.


Re: NES emulator written in D

2018-02-04 Thread welkam via Digitalmars-d-announce
Could you share your experience with us? How does it compare to the Go implementation? Did D make it harder or easier to implement the emulator?




Re: v0.2.1 of EMSI's containers library

2015-08-31 Thread welkam via Digitalmars-d-announce

Thanks for sharing.


Re: Moving forward with work on the D language and foundation

2015-08-25 Thread welkam via Digitalmars-d-announce
On Monday, 24 August 2015 at 18:43:01 UTC, Andrei Alexandrescu 
wrote:

to fully focus on pushing D forward.


insert dick joke here