Re: John Regehr on "Use of Assertions"

2018-09-10 Thread Ola Fosheim Grøstad via Digitalmars-d

On Sunday, 9 September 2018 at 21:20:11 UTC, John Carter wrote:
ie. Yes, everybody knows the words, everybody can read the 
code, everybody can find somebody who agrees with his intent 
and meaning, but get a large enough group together to try to 
agree on what actions, for example, the optimiser should take 
that are implied by that meaning... and flames erupt.


Well, it has less to do with "assert" than with how semantics are 
assigned to something that should never occur. But it isn't 
really optimal to hardcode failure semantics in the source code...


Even simple semantics, like whether tests should be emitted in 
release builds, are possibly context dependent, so not really 
possible to determine with any certainty when writing a 
library that is reused across many projects.
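
For example, a minimal D sketch of the problem (hypothetical 
function): whether the check exists at all is decided 
program-wide by a build switch (-release strips asserts), not by 
the project that reuses the library.

    // Library code: the author cannot express "keep this check for
    // caller A but drop it for caller B"; assert is on or off globally.
    double invert(double x)
    {
        assert(x != 0, "invert: division by zero");  // gone under -release
        return 1.0 / x;
    }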





Re: John Regehr on "Use of Assertions"

2018-09-09 Thread Ola Fosheim Grøstad via Digitalmars-d
On Sunday, 9 September 2018 at 06:27:52 UTC, Jonathan M Davis 
wrote:
So, maybe what we should do here is take a page from Weka's 
playbook and add a function that does something like check a 
condition and then assert(0) if it's false and not try to say 
that assertions should always be left in production.


That's the easy solution, but in real-world development things 
get more complicated. Say, if you develop a game, maybe you want 
no asserts in your render code, but as many asserts as possible 
in your game logic code...


It would make sense to defer some control to the calling 
context, especially with templated functions in a very generic 
library that is used across the entire program. A sketch of the 
idea follows.
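
A minimal sketch of what deferring that control could look like 
in D; the checking policy travels as a compile-time flag, so 
render code and game-logic code can instantiate the same 
function differently (the names and the policy enum are 
hypothetical):

    enum Checks { off, on }

    // The call site, not the library, picks the policy.
    T clampIndex(T, Checks policy = Checks.on)(T i, T length)
    {
        static if (policy == Checks.on)
            assert(i < length, "index out of range");
        return i < length ? i : length - 1;
    }

    unittest
    {
        size_t i = 3, n = 10;
        auto a = clampIndex(i, n);                      // game logic: checked
        auto b = clampIndex!(size_t, Checks.off)(i, n); // render loop: unchecked
    }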




Re: John Regehr on "Use of Assertions"

2018-09-09 Thread Ola Fosheim Grøstad via Digitalmars-d

On Sunday, 9 September 2018 at 08:31:49 UTC, John Carter wrote:
On Sunday, 2 September 2018 at 01:55:53 UTC, Walter Bright 
wrote:

On 9/1/2018 5:47 PM, Nick Sabalausky (Abscissa) wrote:

All in all, John is very non-committal about the whole thing.


He probably got tired of arguing about it :-)


Let's face it, the term "assert" has been poisoned by decades 
of ambiguity.


There is really no ambiguity... The terminology is widespread and 
well understood across the field, I think.


"assume" means that something is taken as a given fact that has 
already been established by others.


"assert" means that it is something that should be established 
before shipping.


For contracts:

"expects"  (or "requires") means that the input to a function 
should have those properties. (precondition)


"ensures" means that the returned value should always have those 
properties. (postcondition)


Microsoft GSL lets you configure pre/post conditions so that on 
a contract violation you either get terminate, throw or (much 
more dangerous) assume. They don't seem to provide an option for 
ignoring it.



#if defined(GSL_THROW_ON_CONTRACT_VIOLATION)

#define GSL_CONTRACT_CHECK(type, cond)                                 \
    (GSL_LIKELY(cond) ? static_cast<void>(0)                           \
                      : gsl::details::throw_exception(gsl::fail_fast(  \
                            "GSL: " type " failure at " __FILE__       \
                            ": " GSL_STRINGIFY(__LINE__))))

#elif defined(GSL_TERMINATE_ON_CONTRACT_VIOLATION)

#define GSL_CONTRACT_CHECK(type, cond)                                 \
    (GSL_LIKELY(cond) ? static_cast<void>(0) : gsl::details::terminate())

#elif defined(GSL_UNENFORCED_ON_CONTRACT_VIOLATION)

#define GSL_CONTRACT_CHECK(type, cond) GSL_ASSUME(cond)

#endif
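
For comparison, D spells the same precondition/postcondition 
vocabulary with in/out contracts; a minimal sketch using the 
expression-based contract syntax of D 2.081 (like asserts, these 
checks are stripped under -release):

    // "expects"/"requires" maps to the in-contract,
    // "ensures" maps to the out-contract on the return value.
    int withdraw(int balance, int amount)
    in (amount > 0 && amount <= balance)
    out (r; r >= 0 && r < balance)
    {
        return balance - amount;
    }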




Re: Dicebot on leaving D: It is anarchy driven development in all its glory.

2018-09-08 Thread Ola Fosheim Grøstad via Digitalmars-d
On Saturday, 8 September 2018 at 14:20:10 UTC, Laeeth Isharc 
wrote:
Religions have believers but not supporters - in fact saying 
you are a supporter says you are not a member of that faith or 
community.


If you are a supporter of Jesus Christ's efforts, then you most 
certainly are a Christian. If you are a supporter of the Pope, 
then you may or may not be Catholic, but you most likely are 
Christian or sympathise with the faith.


Programming languages are more like powertools. You may be a big 
fan of Makita and dislike using other powertools like Bosch and 
DeWalt, or you may have different preferences based on the 
situation, or you may accept whatever you have at hand. Being a 
supporter is stretching it though... Although I am sure that 
people who only have Makita in their toolbox feel that they are 
supporting the company.


Social institutions need support to develop - language is a 
very old human institution, and programming languages have more 
similarity with natural languages along certain dimensions 
(I'm aware that NLP is your field) than some recognise.


Sounds like a fallacy.

So, why shouldn't a language have supporters?  I give some 
money to the D Foundation - this is called providing support.


If you hope to gain some kind of return for it, or consequences 
that you benefit from, then it is more like obtaining support and 
influence through providing funds. I.e. paying for support...


It's odd - if something isn't useful for me then either I just 
move on and find something that is, or I try to directly act 
myself or organise others to improve it so it is useful.  I 
don't stand there grumbling at the toolmakers whilst taking no 
positive action to make that change happen.


Pointing out that there is a problem that needs to be solved in 
order to reach a state where the tool is applicable in a 
production line... is not grumbling.  It is healthy.  Whether 
that leads to positive actions (changes in policies) can only be 
affected through politics, not "positive action".  It doesn't 
help to buy a new, bigger and better motor if the transmission 
is broken.




Re: Dicebot on leaving D: It is anarchy driven development in all its glory.

2018-09-06 Thread Ola Fosheim Grøstad via Digitalmars-d
On Thursday, 6 September 2018 at 14:33:27 UTC, rikki cattermole 
wrote:
Either decide a list of conditions before we can break to 
remove it, or yes lets let this idea go. It isn't helping 
anyone.


Can't you just mark it as deprecated and provide a library 
compatibility range (100% compatible)? Then people will just 
update their code to use the range...


This should be possible to achieve using automated 
source-to-source translation in most cases.
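
A minimal sketch (hypothetical names) of that 
deprecate-and-forward idea:

    import std.algorithm : splitter;
    import std.array : array;

    // New range-based API (lazy).
    auto byLines(string text) { return text.splitter('\n'); }

    // Old eager API, kept 100% source-compatible but nudging
    // users to migrate at every use.
    deprecated("use byLines instead; it returns a lazy range")
    string[] splitLines(string text) { return text.byLines.array; }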







Re: Dicebot on leaving D: It is anarchy driven development in all its glory.

2018-09-06 Thread Ola Fosheim Grøstad via Digitalmars-d
On Thursday, 6 September 2018 at 11:01:55 UTC, Guillaume Piolat 
wrote:
Let me break that to you: core developers are language experts. 
The rest of us are users, and yes, that doesn't make us 
necessarily qualified to design a language.


Who?





Re: John Regehr on "Use of Assertions"

2018-09-05 Thread Ola Fosheim Grøstad via Digitalmars-d

On Wednesday, 5 September 2018 at 19:35:46 UTC, Meta wrote:
I don't disagree. I think the only sane way to use asserts as 
an optimization guide is when the program will abort if the 
condition does not hold. That, to me, makes perfect sense, 
since you're basically telling the compiler "This condition 
must be true past this assertion point, because otherwise 
program execution will not continue past this point". You're 
ensuring that the condition specified in the assert is true by 
definition. Not having that hard guarantee but still using 
asserts as an optimization guide is absolutely insane, IMO.


Yes, if you have an advanced optimizer then it becomes dangerous. 
Although a prover could use asserts-with-a-hint for focusing the 
time spent on searching for proofs. It would be way too slow to 
do that for all asserts, but I guess you could single out some of 
the easier ones that are likely to impact performance. That would 
be safe.


There are some cases where "assume" makes sense, of course. For 
instance, if you know that a byte-pointer will have a certain 
alignment, then you can get the code gen to emit more efficient 
instructions that presume that alignment. Or if you can tell the 
compiler to "assume" that a region of memory is filled with 
zeros, then maybe the optimizer can skip initialization when 
creating objects...


And it kinda makes sense to be able to autogenerate tests for 
such assumptions for debugging. So it would be like an 
assert-turned-into-assume, but very rarely used...
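
A rough sketch of such an assert-turned-into-assume helper. This 
is hypothetical: D has no standard assume intrinsic, so the 
release branch below only marks where a compiler-specific one 
would go.

    void assumeTrue(bool cond, string msg = "broken assumption")
    {
        version (assert)
        {
            assert(cond, msg);     // debug builds: the auto-generated test
        }
        else
        {
            if (!cond) assert(0);  // release: assert(0) halts in D; a true
                                   // "assume" would instead invoke a
                                   // compiler-specific unreachable intrinsic
        }
    }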


This would be more of an expert tool for library authors and 
hardcore programmers than a general compiler-optimization.




Re: John Regehr on "Use of Assertions"

2018-09-05 Thread Ola Fosheim Grøstad via Digitalmars-d

On Monday, 3 September 2018 at 16:53:35 UTC, Meta wrote:
This battle has been fought over and over, with no movement on 
either side, so I'll just comment that no matter what John 
Regehr or anyone else says, my personal opinion is that you're 
100% wrong on that point :-)


Well, John Regehr seems to argue that you shouldn't use asserts 
for optimization even if they are turned on as the runtime might 
override a failed assert.


«As developers, we might want to count on a certain kind of 
behavior when an assertion fails. For example, Linux’s BUG_ON() 
is defined to trigger a kernel panic. If we weaken Linux’s 
behavior, for example by logging an error message and continuing 
to execute, we could easily end up adding exploitable 
vulnerabilities.»


So…



Re: D IDE

2018-09-04 Thread Ola Fosheim Grøstad via Digitalmars-d

On Tuesday, 4 September 2018 at 20:45:53 UTC, H. S. Teoh wrote:
happens.  Things aren't going to materialize out of thin air 
just because people demand for it loudly enough, even if we'd 
like for that to happen.  *Somebody* has to do the work. That's 
just how the universe works.


Human beings are social in nature and follow a group-mentality, 
so when there are people willing to follow and you have a 
persuasive leader willing to lead and outline a bright and vivid 
future, then things can "materialize" out of thin air.


But it is not going to happen without focused leadership and the 
right timing.


There certainly have been people in the forums over the years 
looking for things to do.  So I really don't believe the whole 
"nothing will happen because nothing happens by itself" mantra.


What is almost always the case is that if you base an open source 
sub-project on only 1-2 people then that work is mostly wasted, 
as they most likely will leave it unmaintained. So leadership is 
critical, even when things do "materialize" out of thin air.







Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)

2018-09-04 Thread Ola Fosheim Grøstad via Digitalmars-d
On Tuesday, 4 September 2018 at 13:34:03 UTC, 
TheSixMillionDollarMan wrote:
I think D's 'core' problem, is that it's trying to compete 
with, what are now, widely used, powerful, and well supported 
languages, with sophisticate ecosystems in place already. 
C/C++/Java/C# .. just for beginners.


Yes, I believe there was an academic early on who let his 
students use D, but when C++17 (etc.) came about he held the 
view that it would be better for D to align its semantics with 
C++. He was met with silence, except from me; I supported that 
view. D is too much like C++ for a skilled modern C++ programmer 
to switch over, but D's semantics are also too different to 
compile to C++ in a useful manner.


Then it's also trying to compete with startup languages (Go, 
Rust ) - and some of those languages have billion dollar 
organisations behind them, not to mention the talent levels of 
their *many* designers and contributors.


Ok, so my take on this is that Rust is in the same group as D 
right now, and I consider it experimental as I am not convinced 
that it is sufficient for effective low level programming. 
Although Rust has more momentum, it depends too much on a single 
entity (with unclear profitability) despite being open sourced, 
just like D. Too much singular ownership. Go is also open source 
in theory, but if we put legalities aside then I think it has 
the traits of a proprietary language. They are building up 
a solid runtime, and it has massive momentum within services, but 
the language itself is somewhat primitive and messy. Go could be 
a decent compilation target for a high level language.


That said , I think most languages don't compete directly with 
other languages, but compete within specific niches.


Rust: for writing services and command line programs where C++ 
would have been a natural candidate, but for people who want a 
higher level language or dislike C++.


Go: for writing web-services where Python, Java and C# are 
expected to be too resource-intensive.


D: based on what seem to be recurring themes in the forums, D 
seems to be used by independent programmers (personal taste?) 
and programmers in finance who find interpreted languages too 
slow and aren't willing to adopt C++.


C++ is much more than just a language. It's an established, 
international treaty on what the language must be.


Yes, it is an ISO standard and evolves using a global 
standardisation community as input. By the nature of the 
process, it evolves with feedback from a wide range of user 
groups.


That is not a statement about the quality of D. It's a 
statement about the competitive nature of programming languages.


It kinda is both, but the issue is really what you aim to be 
supporting and what you do to move in that direction.


When there is no focus on any particular use case, just language 
features, then it becomes very difficult to move and difficult to 
engage people in a way that makes them pull in the same direction.



I wonder if that has already happened to D.


No, it mostly comes down to a lack of focus and of a process to 
back it up. Also, memory management should be the first feature 
to nail down; it should come before language semantics...



I just do not see how D can even defeat its major competitors.


But are they really competitors? Is D predominantly used for 
writing web-services? What is D primarily used for? Fast 
scripting-style programming?


Instead D could be a place where those competitors come to look 
for great ideas (which, as I understand it, does occur .. 
ranges for example).


No, there are thousands of such languages. Each university has a 
handful of languages that they create in order to back their 
comp.sci. research.


No need to focus on performance in that setting.

You seem to be saying that raising money so you can pay 
people is enough.


But I wonder about that.


There has to be a focus based on an analysis of where you can be 
a lot better for a specific use scenario: define the core goals 
that will enable something valuable for that scenario, then cut 
back on secondary ambitions and set up a process to achieve those 
core goals (pertaining to a concrete usage scenario).


Being everything for everybody isn't really a strategy. Unless 
you are Amazon, and not even then.


Without defining a core usage scenario you cannot really evaluate 
the situation or the process that has to be set up to change the 
situation...


Well, I've said this stuff many times before.



Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)

2018-09-04 Thread Ola Fosheim Grøstad via Digitalmars-d

On Friday, 24 August 2018 at 13:21:25 UTC, Jonathan M Davis wrote:
On Friday, August 24, 2018 6:05:40 AM MDT Mike Franklin via 
Digitalmars-d wrote:
> You're basically trying to bypass the OS' public API if 
> you're trying to bypass libc.


No I'm trying to bypass libc and use the OS API directly.


And my point is that most OSes consider libc to be their OS API 
(Linux doesn't, but it's very much abnormal in that respect).


Well, it used to be normal to call the OS directly using traps, 
but the context switch is so expensive on modern CPUs that the 
calling stub is only a small fraction of the calling cost these 
days. Thus most don't bother bypassing it.


What usually can happen if you don't use the C stubs with dynamic 
linkage is that your precompiled program won't work with new 
versions of the OS. But that can also happen with static linkage.


Trying to bypass it means reimplementing core OS functionality 
and risking all of the bugs that go with it.


It is the right thing to do for a low level language. Why have 
libc as a dependency if you want to enable hardware oriented 
programming? Using existing libraries also puts limits on low 
level language semantics.


If you're talking about avoiding libc functions like strcmp 
that's one thing, but if you're talking about reimplementing 
stuff that uses syscalls, then honestly, I think that you're 
crazy.


No, it isn't a crazy position, so why the hostile tone? Libc 
isn't available in many settings. Not even in webassembly.
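
For illustration, a rough and untested sketch (x86-64 Linux, 
DMD-style inline asm) of using the OS API directly, with no libc 
stub involved:

    // write(2) invoked via the raw Linux syscall ABI.
    long sysWrite(long fd, const(void)* buf, long len)
    {
        long ret;
        asm
        {
            mov RAX, 1;    // syscall number: __NR_write on x86-64
            mov RDI, fd;
            mov RSI, buf;
            mov RDX, len;
            syscall;
            mov ret, RAX;  // result, or negative errno on failure
        }
        return ret;
    }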





Re: This thread on Hacker News terrifies me

2018-09-04 Thread Ola Fosheim Grøstad via Digitalmars-d
On Monday, 3 September 2018 at 02:58:01 UTC, Nick Sabalausky 
(Abscissa) wrote:
In the 50's/60's in particular, I imagine a much larger 
percentage of programmers probably had either some formal 
engineering background or something equally strong.


I guess some had, but my impression is that it was a rather mixed 
group (probably quite a few from physics, since they got to use 
computers for calculations).


I have heard that some hired people with a music background, as 
musicians understood the basic algorithmic ideas of instructions 
and loops, i.e. how to read and write instructions to be 
followed (sheet music).


Programming by punching in numbers was pretty tedious too... so 
you would want someone very patient.




Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)

2018-09-04 Thread Ola Fosheim Grøstad via Digitalmars-d
The first search engines were created in 1993, google came 
along in 1998 after at least two dozen others in that list, and 
didn't make a profit till 2001. Some of those early competitors 
were giant "billion dollar global companies," yet it's google 
that dominates the web search engine market today.


Why is that?


Their original page-rank algorithm. Basically, they found an 
efficient way of emulating random clicks to all outgoing links 
from a page and thus got better search result rankings.
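
The idea fits in a few lines; a toy D sketch of that random-click 
model (power iteration with the usual 0.85 damping factor, 
assuming every page has at least one outgoing link):

    double[] pageRank(const uint[][] outLinks, size_t iters = 50,
                      double d = 0.85)
    {
        immutable n = outLinks.length;
        auto rank = new double[n];
        rank[] = 1.0 / n;
        foreach (_; 0 .. iters)
        {
            auto next = new double[n];
            next[] = (1.0 - d) / n;  // the "random jump" term
            foreach (i, links; outLinks)
                foreach (j; links)   // spread rank over outgoing links
                    next[j] += d * rank[i] / links.length;
            rank = next;
        }
        return rank;
    }

    // e.g. pageRank([[1, 2], [2], [0]]) for a three-page toy graph.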


It was a matter of timing.



Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)

2018-09-04 Thread Ola Fosheim Grøstad via Digitalmars-d

On Monday, 3 September 2018 at 16:41:32 UTC, Iain Buclaw wrote:
15 years ago, people were complaining that there was only one D 
compiler.  It is ironic that people now complain that there's 
too many.


One needs multiple implementations to confirm the accuracy of the 
language specification. D still has one implementation, i.e. one 
compiler with multiple backends, distributed as multiple 
executables (with tweaks).


Anyway, I think people complained about the first and only 
compiler being non-free. That's not relevant now, of course.




Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)

2018-09-04 Thread Ola Fosheim Grøstad via Digitalmars-d

On Tuesday, 4 September 2018 at 01:36:53 UTC, Mike Parker wrote:

D is not a petri dish for testing ideas. It's not an experiment.


Well, the general consensus for programming languages is that a 
language is experimental (or proprietary) until it is fully 
specced out as a stable formal standard with multiple 
_independent_ implementations...




Re: This thread on Hacker News terrifies me

2018-09-02 Thread Ola Fosheim Grøstad via Digitalmars-d
On Sunday, 2 September 2018 at 04:59:49 UTC, Nick Sabalausky 
(Abscissa) wrote:
A. People not caring enough about their own craft to actually 
TRY to learn how to do it right.


Well, that is an issue: many students enroll in programming 
courses not because they take pride in writing good programs, 
but because they think that working with computers will somehow 
be an attractive career path.


Still, my impression is that students that write good programs 
also seem to be good at theory.


B. HR people who know nothing about the domain they're hiring 
for.


Well, I think that goes beyond HR people. Lead programmers 
in small businesses who either don't have an education or didn't 
do too well will feel that someone who does know what they are 
doing is a threat to their position. Another issue is that 
management does not want to hire people who they think will get 
bored with their "boring" software projects... So they'd rather 
hire someone less apt who will not quit the job after 6 months...


So there are a lot of dysfunctional aspects at the very 
foundation of software development processes in many real world 
businesses.


I wouldn't expect anything great to come out of this... I also 
suspect that many managers don't truly understand that one good 
programmer can replace several bad ones...



C. Overall societal reliance on schooling systems that:

- Know little about teaching and learning,

- Even less about software development,


Not sure what you mean by this. In many universities you can sign 
up for the courses you are interested in. It is really up to the 
student to figure out what their profile should be.


Anyway, since there are many methodologies, you will have to 
train your own team in your specific setup. With a well-rounded 
education, a good student should have the knowledge that will let 
them participate in discussions about how to structure the work.


So there is really no way for any university to teach you exactly 
what the process should be like.


This is no different from other fields.  Take a sawmill: there 
are many ways to structure the manufacturing process in a 
sawmill. Hopefully people with an education are able to grok the 
process and participate in discussions about how to improve it, 
but the specifics depend on the concrete sawmill production 
line.






Re: This thread on Hacker News terrifies me

2018-09-01 Thread Ola Fosheim Grøstad via Digitalmars-d
On Saturday, 1 September 2018 at 11:32:32 UTC, Jonathan M Davis 
wrote:
I'm not sure that I really agree that software engineering 
isn't engineering, but the folks who argue against it do have a 
point in that software engineering is definitely not like most 
other engineering disciplines, and good engineering practices 
are nowhere near as well-defined in software engineering as 
those in other engineering fields.


Most engineering fields have a somewhat stable/slow moving 
context in which they operate.


If you have a specific context (like banking) then you can 
develop a software method that specifies how to build banking 
software, and repeat it, assuming that the banks you develop the 
method for are similar.


Of course, banking has changed quite a lot over the past 15 years 
(online + mobile). Software often operates in contexts that are 
critically different and that change in somewhat unpredictable 
manners.


But road engineers have a somewhat more stable context, they can 
adapt their methodology over time. Context does change even 
there, but at a more predictable pace.


Of course, this might be primarily because computers are new. 
Businesses tend to use software/robotics in a disruptive manner 
to get a competitive edge over their competitors. So the users 
themselves create disruptive contexts in their search for the 
"cutting edge" or "competitive edge".


As it becomes more and more intertwined with how people do 
business, it might become more stable, and then you might see 
methods for specific fields that are more like engineering in 
older, established fields (like building railways).




Re: This thread on Hacker News terrifies me

2018-09-01 Thread Ola Fosheim Grøstad via Digitalmars-d

On Friday, 31 August 2018 at 23:20:08 UTC, H. S. Teoh wrote:
The problem is that there is a disconnect between academia and 
the industry.


No, there isn't. Plenty of research is focused on software 
engineering and software process improvement. Those are separate 
research branches.


The problem is that most people in forums don't have a basic 
grasp of how improbable it is for complex systems to be bug free.


Taking a course where you formally prove a program to be bug 
free fixes that problem real fast.


Also, asserts don't prevent bugs; they can trip hours or days 
after the problem arose. The problem doesn't have to be the code 
either, it could be the data-structure, how different systems 
interact, etc. So just running the buggy software is problematic, 
even before the assert trips. (E.g. your plane could be in flames 
before the assert trips.)


Also, having a shutdown because a search function sometimes 
returns the wrong result is just as unacceptable as getting to 
work late because you couldn't drive your car, because it "was 
asserted" that the clutch was not in perfect condition.


The goal in academia is to produce new research, to find 
ground-breaking new theories that bring a lot of recognition


Most research does not focus on ground-breaking theories, but on 
marginal improvements on existing knowledge.


 > continue existing. As a result, there is heavy focus on the
theoretical concepts, which are the basis for further research, 
rather than pragmatic tedium like how to debug a program.


All formalized knowledge that is reusable is theory. Everything 
you can learn from books is based on theory, unless you only read 
stories and distill knowledge from them with no help from the 
author.


There are of course people who build theory on how to structure, 
test and debug programs.



The goal in the industry is to produce working software.


NO, the goal is to make MONEY. If that means shipping bad 
software and getting to charge large amounts of money for 
consulting then that is what the industry will go for.



using the latest and greatest theoretical concepts.  So they 
don't really care how good a grasp you have on computability 
theory, but they *do* care a lot that you know how to debug a 
program so that it can be shipped on time. (They don't care 
*how* you debug the program, just that you know how to to it, 
and do it efficiently.)


They don't care if it works well, they don't care if it is slow 
and buggy, they care about how the people who pay for it 
respond...


A consequence of this disconnect is that the incentives are set 
up all wrong.


Yes, money does not produce quality products, unless the majority 
of customers are knowledgeable and demanding.


Professors are paid to publish research papers, not to teach 
students.


I think people publish most when they try to become professors. 
After getting there the main task is to teach others the topic 
and help others to do research (e.g. supervising).


 > research.  After all, it's the research that will win you the
grants, not the fact that you won teaching awards 3 years in a 
row, or that your students wrote a glowing review of your 
lectures. So the quality of teaching already suffers.


They have a salary; they don't need a grant for themselves. They 
need a grant to get a salary to fund Ph.D. students, i.e. they 
need grants to teach how to do research.


Then the course material itself is geared toward producing more 
researchers, rather than industry workers.


Most of them become industry workers...

Is it any wonder at all that the quality of the resulting 
software is so bad?


The current shipping software is much better than the software 
that was shipped in the 80s and 90s. To a large extent thanks to 
more and more software being developed in high level languages 
using well tested libraries and frameworks (not C, C++ etc).


Most cars on the road are somewhat buggy, and they could crash as 
a result of those bugs. Most programs are somewhat buggy, and 
they could crash as a result of those bugs.


Most of the buggy fighter jets are on the ground, but would you 
want your car to be unusable 20% of the time?


Would you want Google to disappear 20% of the time because the 
ranking of search results is worse than what the spec says it 
should be?


As usual, these discussions aren't really based on a good 
theoretical understanding of what a bug is.


Try to prove one of your programs formally correct, then you'll 
realize that it is unattainable for most useful programs that 
retain state.


The human body is full of bugs. Hell, we rely on them, even on 
the cell-level. We need those parasites to exist. As programs get 
larger you need to focus on a different strategy to get a 
reliable system. (E.g. actor based thinking).




Re: This thread on Hacker News terrifies me

2018-08-31 Thread Ola Fosheim Grøstad via Digitalmars-d
On Saturday, 1 September 2018 at 05:51:10 UTC, rikki cattermole 
wrote:
Then there are polytechnics which I went to for my degree, 
where the focus was squarely on Industry and not on academia at 
all.


But the teaching is based on research in a good engineering 
school...


But in saying that, we had third year students starting out not 
understanding how cli arguments work so...


Proper software engineering really takes 5+ years just to get 
started, 10+ to become actually good at it. Sadly that won't be 
acceptable in our current society.


The root cause of bad software is that many programmers don't 
even have an education in CS or software engineering, or didn't 
do a good job while getting it!


Another problem is that departments get funding based on how many 
students they have, and many students are not fit to be 
programmers. Then you have the recruitment process: people in 
management without a proper theoretical understanding of the 
topic look for "practical programmers" (must have experience 
with framework X), which basically means that they get 
programmers with low theoretical understanding and therefore 
fail to build an environment where people can learn... So 
building a good team where people can become experts (based on 
actual research) is mostly not happening. It becomes 
experience-based, and the experience is that it isn't broken if 
customers are willing to pay.


Basic capitalism. Happens outside programming too. Make 
good-looking shit that breaks right after the warranty expires.


Anyway, Software Engineering most certainly is a research 
discipline separate from CS and there is research and theory for 
developing software at different cost levels.


Games are not bug free because that would be extremely expensive 
and cause massive delays in shipping, which would make it 
impossible to plan marketing. Games are less buggy when they 
reuse existing frameworks, but that makes for less exciting 
designs.




Re: Is there any good reason why C++ namespaces are "closed" in D?

2018-07-30 Thread Ola Fosheim Grøstad via Digitalmars-d

On Monday, 30 July 2018 at 03:05:44 UTC, Manu wrote:
In 20 years, I've never seen 2 namespaces in the same file. And 
if the C++ code were like that, I'd break it into separate 
modules anyway, because that's what every D programmer would 
expect.


It is common in C++ to have multiple hierarchical namespaces in 
the same file...





Re: C's Biggest Mistake on Hacker News

2018-07-25 Thread Ola Fosheim Grøstad via Digitalmars-d

On Tuesday, 24 July 2018 at 07:19:21 UTC, Paolo Invernizzi wrote:
If I'm not wrong, Python has grown up really slowly and 
quietly, till the recent big success in the scientific field.


Python replaced Perl and to some extent PHP... two very 
questionable languages, which was a good starting-point. C++98 
was a similarly questionable starting-point, but nobody got the 
timing right while that window was open.


The success of Python in scientific computing is to a large 
extent related to C though.




Re: Sutter's ISO C++ Trip Report - The best compliment is when someone else steals your ideas....

2018-07-15 Thread Ola Fosheim Grøstad via Digitalmars-d

On Friday, 13 July 2018 at 12:55:33 UTC, Adam D. Ruppe wrote:
You use process isolation so it is easy to restart part of it 
without disrupting others. Then it can crash without bringing 
the system down. This is doable with segfaults and range 
errors, same as with exceptions.


This is one of the most important systems engineering 
principles: expect failure from any part, but keep the system 
as a whole running anyway.


If we are talking about something application-specific and in 
probabilistic terms, then yes certainly.


But that is not the absolutist position where any failure should 
lead to a shutdown (and consequently a ban on reboot as the 
failed assert might happen hours after the actual buggy code 
executed).


The absolutist position would also have to assume that all 
communicated state is corrupted, so a separate process does not 
improve the situation. Since you don't know with 100% certainty 
what the bug consists of, you should not retain any state from 
any source after the _earliest_ time where the buggy logic in 
theory could have been involved. All databases should be assumed 
corrupted, no messages should be accepted, etc. (messages and 
databases are no different from memory in this regard).


In reality absolutist positions are usually not possible to 
uphold, so you have to move to a probabilistic position. And the 
compiler cannot make probabilistic assumptions; you need a lot of 
contextual understanding to make those probabilistic assessments 
(e.g. the architect or programmer has to be involved).


Fully reactive systems do not retain state, of course, and those 
would change the argument somewhat, but they are very rare... 
mostly limited to control systems (cars, airplanes etc).


The idea behind actor-based programming (e.g. Erlang) isn't that 
bugs don't occur or that the overall system will exhibit correct 
behaviour, but that it should be able to correct or adapt to 
situations despite bugs being present.  But that is really, 
predominantly, not available to us with the very "crisp" logic we 
use in current languages (true/false, all or nothing). Maybe 
something better will come out of probabilistic programming 
paradigms and software synthesis some time in the future. Within 
the current paradigms we are stuck with the judgment of the 
humans involved.


Interestingly biological systems are much better at robustness, 
fault tolerance and self-healing, but that involves a lot of 
overhead and also assumes that some failures are acceptable as 
long as the overall system can recover from it. Actor-programming 
is based on the same assumption, the health of the overall (big) 
system.


Re: Sutter's ISO C++ Trip Report - The best compliment is when someone else steals your ideas....

2018-07-15 Thread Ola Fosheim Grøstad via Digitalmars-d

On Wednesday, 11 July 2018 at 22:35:06 UTC, crimaniak wrote:
The people who developed Erlang definitely have a lot of 
experience developing services.


Yes, it was created for telephone exchanges. You don't want a 
phone exchange to go completely dead just because there is a bug 
in the code. That would be a disaster. And very dangerous too 
(think emergency calls).


The crucial point is whether you can depend on the error being 
isolated, as in Erlang's lightweight processes. I guess D 
assumes it isn't.
 I think if we have a task with safe code only, and 
communication with message passing, it's isolated good enough 
to make error kill this task only. In any case, I still can 
drop the whole application myself if I think it will be the 
more safe way to deal with errors. So paranoids do not lose 
anything in the case of this approach.


Yup, keep critical code that rarely changes, such as committing 
transactions, in a completely separate service, and keep 
constantly changing code where bugs will be present separate from 
it.


Anyway, it is completely idiotic to terminate a productivity 
application because an optional editing function (like a filter 
in a sound editor) generates a division-by-zero. End users would 
be very unhappy.


If people want access to a low-level programming language, then 
they should also be able to control error-handling. Make 
tradeoffs regarding denial-of-service attack-vectors and 100% 
correctness (think servers for entertainment services like game 
servers).


What people completely fail to understand is that if an assert 
trips, then it isn't sufficient to reboot the program. So if an 
assert should always lead to shut-down, then it should also 
prevent the program from being run again, by the same line of 
argument. The bug is still there. That means that all bugs lead 
to complete service shutdown until the bug has been fixed, and 
that would make for a very shitty entertainment experience and 
many customers lost.






Re: D beyond the specs

2018-03-18 Thread Ola Fosheim Grøstad via Digitalmars-d

On Sunday, 18 March 2018 at 07:06:37 UTC, Ali Çehreli wrote:
It may not be distributor greed: I was one of the founders of a 
WordPerfect distributor in Turkey in around 1991.


Cool :-)

I don't know whether it was the US government rules or 
WordPerfect rules but they simply could not sell us anywhere 
near what US consumers were paying. $500 in Turkey is still an 
impossibly high price.


*nods*  I find it kinda interesting that the global distribution 
that came with the Internet may have made it more difficult to 
differentiate prices, both ways.  Also harder to sell with lower 
margins in 3rd world countries.  E.g. on Amazon you can now find 
cheaper reprints of textbooks targeting universities in India...


Of course, localized software (language barrier) may still be 
used to differentiate.




Re: D beyond the specs

2018-03-17 Thread Ola Fosheim Grøstad via Digitalmars-d
On Saturday, 17 March 2018 at 09:31:58 UTC, Ola Fosheim Grøstad 
wrote:
I don't know about compilers specifically, but the big 
distributors in Europe charged some hefty margins on their 
imports. So pricing in US was often much lower than here...


When I think of it, the distributors probably only cared about 
corporate customers for software development (and my impression 
is that distributors often didn't know much about computers and 
software anyway). Since distributors didn't know better, they 
hired young computer enthusiasts to work for them, who cracked 
the software protections and spread the software among their 
friends before it hit the stores...


So European computer enthusiasts had easy access to bootleg 
copies of common software. Copying was rampant for cultural 
reasons, which included common fair use clauses that allowed 
copying between individuals and friends. By rampant, I mean 
people copied >90% of the software they used.


I knew of more people who bought "alternative dev tooling" (at 
reasonable pricing) than the offerings from big players (which 
often would cost more than the computer hardware, and as a 
recurring cost...). There was also an attitude that "if the price 
is unreasonably high then it is perfectly reasonable and moral to 
distribute bootleg copies of it".





Re: D beyond the specs

2018-03-17 Thread Ola Fosheim Grøstad via Digitalmars-d

On Friday, 16 March 2018 at 22:43:57 UTC, Chris wrote:
Most interesting! I'm not kidding. Is it 'wow it's from the 
US', or something else? Genuine question. I ain't asking for 
fun. There's more to business and technology than meets the eye.


I don't know about compilers specifically, but the big 
distributors in Europe charged some hefty margins on their 
imports. So pricing in US was often much lower than here...


So smaller competitors with ads in national hobbyist computing 
mags had a relatively easy marketing channel.




Re: D beyond the specs

2018-03-17 Thread Ola Fosheim Grøstad via Digitalmars-d

On Friday, 16 March 2018 at 22:25:50 UTC, jmh530 wrote:
This sort of analysis applies to programming languages in 
exactly the same way. If I'm a company, do I build products 
using language X or language Y. If I'm a person, do I spend N 
hours learning language X or language Y (or do the next best 
thing you can do...March Madness?). What if I already know 
language X? Then it's pure marginal cost to learn language Y.


For me, I'd say learning the language is the low cost. The high 
cost is in finding the tooling and the ecosystem, maintaining the 
configuration, and being sure that it is supported over time and 
that it can target the platforms I am likely to be interested in 
(now and in the future).


So, I write toy programs for new languages to see what they are 
like out of curiosity, but I am not likely to adopt any language 
that doesn't have a dedicated IDE. I'm not interested in taking a 
hit on tooling-related costs.


That last part actually makes me reluctant to adopt Rust, 
Dart and Angular. So I'd say the threshold for moving from 
"non-critical" usage to "critical" usage is quite high.


On the other hand I have a lot less resistance to adopting 
TypeScript, since it is a fairly thin layer over Javascript. Thus 
I can easily move away from it if it turns out to be limiting 
down the road.


C programmers don't just switch to D or Rust or whatever the 
moment they see it has some "technical" features that are 
better. That's not what we observe. The marginal benefit has to 
exceed the marginal cost.


Actually, I'd say no marginal benefit is worth moving away from a 
platform with quality tooling.


So you have to win on productivity and performance (memory, 
portability, speed).




Re: D beyond the specs

2018-03-17 Thread Ola Fosheim Grøstad via Digitalmars-d
On Saturday, 17 March 2018 at 08:48:45 UTC, Ola Fosheim Grøstad 
wrote:
Anyway, cultural change is slow. Even though the 70s is far 
away, it still probably has an effect on culture and attitudes 
in universities and the tech sector.


In the late 80s I was quite surprised that Danish computing mags 
(for hobbyists) wrote a lot about a language I had never heard of 
before:


https://en.wikipedia.org/wiki/COMAL

I am sure there has been other similar trends in other European 
countries.




Re: D beyond the specs

2018-03-17 Thread Ola Fosheim Grøstad via Digitalmars-d

On Friday, 16 March 2018 at 14:50:26 UTC, Paulo Pinto wrote:
Well, Algol, Pascal, Oberon, Component Pascal, VHDL, Ada are 
all examples of programming languages successfully used in 
Europe, while having adoption issues in the US.


There are some historical roots, I believe. In the 60s and 70s 
computing was so expensive that government funding was a driving 
force. Since each nation then also wanted to have its own 
computing industry, they favoured national companies (and thus 
employment), so each nation had its own CPU/hardware 
architecture and ecosystem. And Europe has many many nations... 
So quite a heterogeneous computing environment... :-P


The US has a much bigger internal market and had some key big 
players early on ("nobody has been fired for picking IBM"). They 
also have many large corporations that could sustain the cost of 
establishing a commercial computing sector.


Not sure how that works out today, though, as there is no longer 
a strong focus on national computing industries (unless you count 
Apple and Microsoft as such). Asia has run away with the hardware 
and development software has become less and less 
proprietary/national each decade.


My perception is that there is a difference between academic 
research oriented institutions and more rural engineering 
institutions. The former would focus more on language qualities 
(surprisingly, the University of Oslo is now switching from Java 
to Python, probably because it is used a lot in data analysis), 
while the latter would focus more on business marketable 
languages (C++).


Anyway, cultural change is slow. Even though the 70s is far away, 
it still probably has an effect on culture and attitudes in 
universities and the tech sector.


Also, since most applications are no longer CPU bound there 
should be much more opportunity for trying new options today than 
10-20 years ago.






Re: C++ launched its community survey, too

2018-02-28 Thread Ola Fosheim Grøstad via Digitalmars-d
On Tuesday, 27 February 2018 at 20:33:18 UTC, Jonathan M Davis 
wrote:
The other problem is that many of C++'s problems come from 
being a superset of C, which is also a huge strength, and it 
would be a pretty huge blow to C++ if it couldn't just #include 
C code and use it as if it were C++. To truly fix C++ while 
retaining many of its strengths would require fixing C as well, 
and that's not happening.


I think you can have C plus another language just fine.

In C++ you use extern "C" as a wrapper; you could easily run a 
completely different parser within an extern "C" block and build 
a different type of AST for it.


This was probably out of reach in the 80s and early 90s when 
compiler resources were an issue, but with LLVM and gigabytes of 
RAM I think it would be quite reasonable to do something like 
that.


People don't do it because the basic counter argument always is 
"but we already support C through linkage", but that doesn't mean 
that there would be no real productivity advantage to building C 
into the language.


Not that I would do it if I designed a system level language, 
mostly because I think it would be better to build tools for 
transpiling code from C to the new language. C is a dead end now, 
I think.


Ola



Re: Go 2017 Survey Results

2018-02-27 Thread Ola Fosheim Grøstad via Digitalmars-d

On Tuesday, 27 February 2018 at 15:30:44 UTC, JN wrote:
I'm no expert on programming language design, but I think there 
is a difference between templates and generics. At the basic 
level they are used for similar things - specializing container 
types, etc. But templates usually allow much more, so they end 
up being layers of layers of compile time code, e.g. D and 
Boost, which while nifty has it's compilation time cost.


Generic programming just means that you leave out the 
specification for some types in the code for later.


C++-style templates, as D has, are one solution to generic 
programming.  When some people write «generics» they probably 
think of Java and something like that...
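
A small sketch of that difference: D templates accept value 
parameters and can branch at compile time, which has no analog 
in Java-style generics:

    import std.stdio;

    // `n` is a compile-time value parameter.
    T[n] fill(T, size_t n)(T value)
    {
        T[n] buf;                   // stack array sized at compile time
        buf[] = value;
        static if (is(T : double))  // compile-time branching on the type
            pragma(msg, "numeric instantiation");
        return buf;
    }

    void main()
    {
        writeln(fill!(int, 4)(7)); // [7, 7, 7, 7]
    }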




Re: Dub, Cargo, Go, Gradle, Maven

2018-02-16 Thread Ola Fosheim Grøstad via Digitalmars-d

On Friday, 16 February 2018 at 19:40:07 UTC, H. S. Teoh wrote:
On Fri, Feb 16, 2018 at 07:40:01PM +, Ola Fosheim Grøstad 
via Digitalmars-d wrote:

On Friday, 16 February 2018 at 18:16:12 UTC, H. S. Teoh wrote:
> The O(N) vs. O(n) issue is actually very important once you

I understand what you are trying to say, but this usage of 
notation is very confusing. O(n) is exactly the same as O(N) if 
N relates to n by a given percentage.


N = size of DAG
n = size of changeset

It's not a fixed percentage.


Well, for this comparison to make sense asymptotically you have 
to consider how n grows when N grows towards infinity. Basically 
without relating n to N we don't get any information from O(n) vs 
O(N).


If you cannot bound n in terms of N (lower/upper) then O(n) is 
most likely either O(1)  or O(N) in relation to N... (e.g. there 
is a constant upper limit to how many files you modify manually, 
or you rebuild roughly everything)


Now, if you said that at most O(log N) files are changed, then 
you could have an argument in terms of big-oh.
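
To make the step explicit: if n = cN for some constant fraction 
c > 0, then O(n) = O(cN) = O(N), since big-oh absorbs constant 
factors. The O(n) vs O(N) distinction only carries information 
under a sublinear bound such as n = O(log N) or n = O(1).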




Re: Dub, Cargo, Go, Gradle, Maven

2018-02-16 Thread Ola Fosheim Grøstad via Digitalmars-d

On Friday, 16 February 2018 at 18:16:12 UTC, H. S. Teoh wrote:

The O(N) vs. O(n) issue is actually very important once you


I understand what you are trying to say, but this usage of 
notation is very confusing. O(n) is exactly the same as O(N) if N 
relates to n by a given percentage.





Re: Flow-Design: OOP component programming

2018-02-14 Thread Ola Fosheim Grøstad via Digitalmars-d

On Wednesday, 14 February 2018 at 18:43:34 UTC, Mark wrote:
Luna [1], a new programming language that was recently 
mentioned on Reddit, also appears to take this "flow-oriented 
design" approach. It's purely functional, not OO, and I'm 
curious to see how it evolves.


[1] http://www.luna-lang.org



There are many flow-based languages...

https://en.wikipedia.org/wiki/Dataflow

https://en.wikipedia.org/wiki/Max_(software)

https://en.wikipedia.org/wiki/Reactive_programming



Re: Being Positive

2018-02-13 Thread Ola Fosheim Grøstad via Digitalmars-d

On Tuesday, 13 February 2018 at 18:25:25 UTC, 9il wrote:
Do we follow the main initial promise to be better than / to 
replace C/C++?


The last question is the most important. Instead of 
targeting a real market for us, which is C/C++, we are building 
a "trendy" language to compete with Python/Java/Go/Rust. Trends 
are changing but the C/C++ market is waiting for us!


That is probably true. Also in the convenience department any 
small language will have problems reaching a high valuation...


Rust seems to be able to pick up some of the C market though.



Re: Quora: Why hasn't D started to replace C++?

2018-02-12 Thread Ola Fosheim Grøstad via Digitalmars-d
On Monday, 12 February 2018 at 09:04:57 UTC, psychoticRabbit 
wrote:
The best C++ can do, is (slowly) think about copying what 
others are doing..to the extent C++ can handle it, and to the 
extent the 'committees' agree to doing it.


Well, most languages are like that... including D.



Re: Quora: Why hasn't D started to replace C++?

2018-02-11 Thread Ola Fosheim Grøstad via Digitalmars-d

On Sunday, 11 February 2018 at 15:07:00 UTC, German Diago wrote:
If it is not implementable (it is complex, I agree), why are 
there 3 major compilers?


At least 4: Intel, Microsoft, GCC and Clang. Then you have EDG 
and IBM, probably more.






Re: Which language futures make D overcompicated?

2018-02-09 Thread Ola Fosheim Grøstad via Digitalmars-d

On Friday, 9 February 2018 at 20:49:24 UTC, Meta wrote:
was a complicated language, 99 of them would say no. If you ask 
100 Python programmers, 99 would probably say yes.


Yes, but objectively speaking I'd say modern Python is more 
complicated than C++ and D.


What Python got right is that you don't have to deal with the 
complicated stuff unless you are hellbent on dealing with it. 
Python affords a very smooth incremental learning curve, but it 
is still a long learning curve...





Re: Quora: Why hasn't D started to replace C++?

2018-02-09 Thread Ola Fosheim Grøstad via Digitalmars-d
On Friday, 9 February 2018 at 14:49:27 UTC, Jonathan M Davis 
wrote:
On Friday, February 09, 2018 14:01:02 Atila Neves via 
Digitalmars-d wrote:
Unit tests are a great idea, right? Try convincing a group of 
10 programmers who have never written one and don't know 
anyone else who has. I have; I failed.


The amazing thing is when a programmer tries to argue that 
having unit tests makes your code worse - not that they take 
too long to write or that they don't want to bother with them 
or any of that - but that they somehow make the code worse.


- Jonathan M Davis


Actually, it can make lazy programmers write worse code if they 
write code to satisfy the tests instead of writing code according 
to the spec...




Re: A betterC base

2018-02-08 Thread Ola Fosheim Grøstad via Digitalmars-d

On Thursday, 8 February 2018 at 19:51:05 UTC, bachmeier wrote:
The developers working on .NET had the opportunity to learn 
from Java, yet they went with GC.[0] Anyone that says one 
approach is objectively better than the other is clearly not 
familiar with all the arguments - or more likely, believes 
their problem is the only real programming problem.


Reference counting isn't a general solution, and it is very slow 
when you allow flexible programming paradigms that generate lots 
of objects.


So, it all depends on how much flexibility you want to allow 
your programmers while still having reasonable performance.


(The vast majority of high level programming languages use GC, 
and have done so since the 60s.)




Re: Annoyance with new integer promotion deprecations

2018-02-06 Thread Ola Fosheim Grøstad via Digitalmars-d

On Tuesday, 6 February 2018 at 07:18:59 UTC, Walter Bright wrote:
Except for 16 bit shorts. Shorts will exact a penalty :-) and 
so shorts should only be used for data packing purposes.


Which CPU model are you thinking of?





Re: Annoyance with new integer promotion deprecations

2018-02-06 Thread Ola Fosheim Grøstad via Digitalmars-d

On Monday, 5 February 2018 at 21:38:23 UTC, Simen Kjærås wrote:
If you were negating a byte, the code does have compile-time 
known values, since there's a limit to what you can stuff into 
a byte. If you weren't, the warning is warranted. I will admit 
the case of -(-128) could throw it off, though.


D has modular arithmetic across the board, so there is no limit 
to what you can stuff into a byte or int.


But doing all calculations as int instead of byte prevents SIMD 
optimizations.
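
A small sketch of both points, assuming D's usual C-style 
integer promotions:

    void main()
    {
        byte a = 100, b = 100;
        auto c = a + b;              // promoted: c is int, value 200
        byte d = cast(byte)(a + b);  // modular wrap: d == -56

        // Array ops keep the element type, which backends can vectorize:
        byte[16] xs = 1, ys = 2;
        xs[] += ys[];                // byte-wise add, SIMD-friendly
    }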




Re: My choice to pick Go over D ( and Rust ), mostly non-technical

2018-02-05 Thread Ola Fosheim Grøstad via Digitalmars-d

On Monday, 5 February 2018 at 12:23:58 UTC, psychoticRabbit wrote:
No. C++ is primarily about higher order abstractions. That's 
why it came about.
Without the need for those higher order abstractions, C is just 
fine - no need for C++.


Actually, many programmers switched to C++ in the 90s just to get 
function overloads and inlining without macros.


If you think C is just fine then I'm not sure why you are 
interested in D.


The benefits of C, come from C - and only C (and some good 
compiler writers)


Not sure what you mean by that.

There is little overhead to using C++ over C if you set your 
project up for that. The only benefit of C these days is 
portability.


(depends on what 'many' means) - There certainly are 
'numerous' (whatever that means) projects trying to create a 
better C - which contradicts your assertion.


Which ones are you thinking of?



Re: My choice to pick Go over D ( and Rust ), mostly non-technical

2018-02-05 Thread Ola Fosheim Grøstad via Digitalmars-d

On Monday, 5 February 2018 at 11:25:15 UTC, psychoticRabbit wrote:

C is for trusting the programmer, so that they can do anything.

It's also for keeping things fast - *even if not portable*.


C++ is the same...


Last but not least, C is for keeping things small, and simple.


Yes, it takes less discipline to keep executables small in C, 
thanks to the lack of powerful abstraction mechanisms... Although 
in theory there is no difference.


C does all this really well, and has done so... for a very long 
time.


I don't know how well it does it. When designing for assembly you 
tend to be annoyed by how inefficient C is in translating to 
machine language... But people generally don't do that anymore so 
I guess that perception is lost.


I believe this is why it's not so easy to create a 'better' C 
(let alone convince people that there is a need for a better C).


I don't think many want a replacement for C, in the sense that 
the language is very limited.


It is possible to create a much better language for embedded 
programming than C, but the market is not growing, thanks to 
CPUs being much more powerful now, even for embedded.




Re: My choice to pick Go over D ( and Rust ), mostly non-technical

2018-02-05 Thread Ola Fosheim Grøstad via Digitalmars-d

On Monday, 5 February 2018 at 08:06:16 UTC, Boris-Barboris wrote:
I have no doubt it can be done in the end. I solely imply that 
the disadvantage here is that in C's "main" (imo) use case it 
has to be done, and that is a thing to be concerned about when 
picking a language.


Yes, the wheels are turning. C is for portability and C++ is for 
system level programming.




Re: Quora: Why hasn't D started to replace C++?

2018-02-04 Thread Ola Fosheim Grøstad via Digitalmars-d

On Sunday, 4 February 2018 at 18:00:22 UTC, Russel Winder wrote:
What the D Foundation needs is a CEO who is a good CEO. Good 
CTOs rarely make good CEOs, though it is possible.


Well, I don't think making some hard decisions about memory 
management, which is the most apparent issue with D, is a CTO/CEO 
problem.  It is more an issue of not being willing to make some 
sacrifices in order to gain momentum.


If "better C" is the new strategy, then that is fine, if that 
means you will make "better C" the main focus, going lean.


If it means that you have essentially forked into two main 
directions... then it just makes it even harder to gain momentum.




Re: My choice to pick Go over D ( and Rust ), mostly non-technical

2018-02-03 Thread Ola Fosheim Grøstad via Digitalmars-d

On Saturday, 3 February 2018 at 06:15:31 UTC, Joakim wrote:
Software evolves. It isn't designed. The only question is how 
strictly you _control_ the evolution, and how open you are to 
external sources of mutations.


Unix was designed... and was based on a more ambitious design 
(Multics).

Linux was a haphazard kitchen-sink DIY reimplementation of it...

I wouldn't trust Linus much when it comes to general statements 
about software development.


Evolution does not mean that you don't design.

Cars evolve, but they sure are designed.



Re: My choice to pick Go over D ( and Rust ), mostly non-technical

2018-02-02 Thread Ola Fosheim Grøstad via Digitalmars-d

On Friday, 2 February 2018 at 15:06:35 UTC, Benny wrote:

75. Go
69. .Net
67. Rust
64. Pascal < This one surprised even me.
63. Crystal
60. D
55. Swift
51. Kotlin


It is interesting that you took the time to score different languages, but of course, there probably are a lot of languages or frameworks that you didn't score.


Anyway, I think in most cases polyglot programmers looking for high productivity would pick a language based on a very small set of parameters, which basically have very little to do with the language itself:

What would be the most productive tooling for this very narrow problem I am facing? Then you look at tooling that has been used for similar problems, and the ecosystem around that.


Rust, Crystal, Kotlin and Pascal would typically be very far down on that list. The ecosystem being an issue.


In reality many programming tasks can be solved efficiently with 
"interpreted" languages like Python or the Javascript-ecosystem. 
I.e. you can get good performance and high productivity for many 
applications if you are selective in how you build your 
application. The reason for this? They have been used for many 
different applications, so other people have already done the 
groundwork.




Re: On reddit: Quora: Why hasn't D started to replace C++?

2018-02-01 Thread Ola Fosheim Grøstad via Digitalmars-d

On Thursday, 1 February 2018 at 19:10:29 UTC, H. S. Teoh wrote:
somewhat correlate with his 20-25% macros figure. The fact that 
D brings powerful metaprogramming features to the average coder 
in a form that's not intimidating (or less intimidating than, 
say, C++ templates with that horrific syntax), to me, is 
something very significant.


I don't think C++17 template syntax is all that bad, which has a 
lot to do with constexpr. I kinda like it.


But there are still special cases that are difficult to express.
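
To give this a concrete flavour (my own example, nothing from the thread): with C++17 you can write one ordinary-looking function template that picks its implementation with "if constexpr", where you previously needed overloads, tag dispatch or SFINAE:

#include <string>
#include <type_traits>

// One template; the discarded branches are never even instantiated.
template <typename T>
std::string describe(const T& value) {
    if constexpr (std::is_integral_v<T>)
        return "integral: " + std::to_string(value);
    else if constexpr (std::is_floating_point_v<T>)
        return "floating point: " + std::to_string(value);
    else
        return "something else";
}

int main() {
    describe(42);    // takes the integral branch
    describe(3.14);  // takes the floating point branch
    describe("hi");  // falls through to the generic branch
}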



Re: Quora: Why hasn't D started to replace C++?

2018-01-31 Thread Ola Fosheim Grøstad via Digitalmars-d
On Wednesday, 31 January 2018 at 19:00:57 UTC, Jack Stouffer 
wrote:
That's just something that Walter was able to bang out in an 
hour, should have been done years ago, and was excited about.


So it isn't a big deal, but IMO that should be left to an IDE or 
shell.


Back to the argument about CPython vs PyPy / DMD vs LDC. You want the reference compiler to be relatively simple and leave the unnecessary complications to the production compiler. So if DMD's backend were simple and only a bare-minimum implementation of the semantics, then having multiple compilers would be comparable to the Python situation. PyPy is probably way too complicated to act as a reference and it would be difficult for Python to move if it had been the reference.


Some might say that DMD is too complicated to act as a reference 
as well, and that D has a problem moving because of it. If that 
is true, and I think it is, then the logical conclusion would be 
to make the DMD codebase simpler and leave performance to LDC. 
Like Python does with PyPy.




Re: Quora: Why hasn't D started to replace C++?

2018-01-31 Thread Ola Fosheim Grøstad via Digitalmars-d

On Wednesday, 31 January 2018 at 21:42:47 UTC, Ali wrote:

The kinda small discussion on ycombinator
https://news.ycombinator.com/item?id=16270841


Interesting... most of them don't grok C++, D, Java or Go... Hope 
people don't look to ycombinator for answers.




Re: Quora: Why hasn't D started to replace C++?

2018-01-31 Thread Ola Fosheim Grøstad via Digitalmars-d

On Wednesday, 31 January 2018 at 18:05:30 UTC, jmh530 wrote:
contribute their skills. For instance, Mike Parker's work on 
the D blog has been a great improvement in communication the 
past year or two.


Yep, having a living blog is very important I think. It is always something I look at when visiting a project website, just to assess where the project is moving.


Re: Quora: Why hasn't D started to replace C++?

2018-01-31 Thread Ola Fosheim Grøstad via Digitalmars-d
On Wednesday, 31 January 2018 at 14:22:03 UTC, Jack Stouffer 
wrote:
It's quite easy to tell when criticism is made in good or bad 
faith


Is it?  Why do so many people have problems with it then? 
Stupidity?


and at this point I'm going to reply to every rant in bad faith 
on here about how terrible D is with "Post issue numbers" and 
nothing else. If you have a legitimate problem, make an issue 
at issues.dlang.org


Ok, and now you are entering a messy space: define "legitimate"? I think the most important issue he raised was how project management is either under-communicated or poorly conducted. Either way he sends a strong signal that he is one person (of many) that D failed to convert even though he was motivated and able. I don't think that is his problem... as he has many other options, but it most certainly could be an indicator of a project challenge.


By neglecting that you also neglect what could be a source for 
process improvement.  Development processes need continuous 
improvement. They don't happen by themselves, they need attention 
throughout the lifespan of the project. It is a matter of 
priorities, of course, but that is not a question of 
"legitimate", that is a question of "ranking".


For some reason this ranks below colourful error messages. I don't know the reasoning behind that ranking, so I am not going to argue whether those are the right priorities, but it _looks_ odd, so something is either under-communicated or maybe he was right about management-related issues. I don't know.


Whatever spot D is in right now in comparison to other projects, good or bad, most certainly isn't because of a lack of marketing. Marketing would only bring in more demanding users, and more such not-"legitimate" issues would be raised.


People expect less friction today than they did 10 years ago. To 
some extent Microsoft, Google, Jetbrains and others have handed 
out slick freebies and conditioned programmers to be more 
demanding. That is the dynamics of the current "market".




Re: Quora: Why hasn't D started to replace C++?

2018-01-31 Thread Ola Fosheim Grøstad via Digitalmars-d
On Wednesday, 31 January 2018 at 13:54:25 UTC, Jack Stouffer 
wrote:
I've only ever seen people complain about D in this area. Never 
once have I seen someone argue that the existence of PyPy hurts 
Python or gogcc hurts Go.


Well, I've seen that people think that MS C++ is keeping C++ back 
because it failed to keep up, so people couldn't write portable 
code using the latest features...


PyPy and gogcc don't hurt Python and Go because the reference interpreters/compilers are mature, stable and cannot take more manpower...


Apples and oranges, does not compare well.



Re: Quora: Why hasn't D started to replace C++?

2018-01-31 Thread Ola Fosheim Grøstad via Digitalmars-d

On Wednesday, 31 January 2018 at 10:35:06 UTC, Benny wrote:
I told before if there is no leadership to focus resources and 
the main developers are more focused on adding more features to 
the compiler that is already overflowing, then how do you 
expect to see things solved?


Yes, that is probably the main issue. Where D is going with 
memory management is still not clear after...  many years.  It is 
also not clear whether D is trying to carve its own niche or is 
going after C/C++/Rust, different people state very different 
goals.  So it is difficult to imagine where D is heading.  At the 
moment it is basically easier to just stand by and watch while 
using more narrowly focused languages in production.


Let me say this again: I like D but unless you are years into D 
as a developer its just frustrating to deal with the constant 
issues.


So basically the same issues as C++.

Tired of reading the same rhetoric as are the people who think posts like this are troll posts.


People who pretend to be self-confident, but aren't, tend to go 
with that rhetoric. *shrugs*




Re: Quora: Why hasn't D started to replace C++?

2018-01-31 Thread Ola Fosheim Grøstad via Digitalmars-d

On Wednesday, 31 January 2018 at 06:27:05 UTC, Paulo Pinto wrote:
And lets not forget Arduino, ESP286 and ESP32 are making 
wonders for the kids to jump into C++ as their first language.


That's interesting, MS got a lot of traction for BASIC by making 
it available in ROM on basically (no pun intended) all 8-bit 
computers in the 80s. To most young programmers there was no 
other language than BASIC (and machine language).


Re: Quora: Why hasn't D started to replace C++?

2018-01-30 Thread Ola Fosheim Grøstad via Digitalmars-d

On Tuesday, 30 January 2018 at 21:49:39 UTC, H. S. Teoh wrote:
"extremely eefficient native code".  I don't argue that C++ has 
extremely efficient native code. But so has D. So the claim 
that C++ has an "enormous performance advantage" over D is 
specious.


We also need to keep in mind that for a small segment of C++ 
programmers it is important to be able to use CPU/SoC/hardware 
vendor backed compilers so that they can ship optimized code the 
day a new CPU is available. So there is a distinct advantage 
there for people who don't aim for consumer CPUs.


Most programmers don't care as much, since adoption of new CPUs 
is slow enough for GCC/Clang to catch up in time.


Anyway, as C++ is taking more and more of C's niche, this issue can become more and more "threatening". E.g. hardware vendors that now only ship C compilers might in the future only ship C++ compilers... I don't know exactly where this is going, but it is possible that C++ could become hard to displace for hardware-oriented programming. Seems like more and more embedded programming is moving to C++ from C.


Re: Quora: Why hasn't D started to replace C++?

2018-01-30 Thread Ola Fosheim Grøstad via Digitalmars-d

On Tuesday, 30 January 2018 at 21:49:39 UTC, H. S. Teoh wrote:
Meaning, the "enormous performance advantage" is because of 
"extremely eefficient native code".  I don't argue that C++ has 
extremely efficient native code. But so has D. So the claim 
that C++ has an "enormous performance advantage" over D is 
specious.


Well, it isn't relevant for those people who would adopt D anyway.

Of course, C++ and Java have some advantages by being so large 
that there is a market for commercial specialty solutions and 
services... Although most C++ and Java programmers use tooling 
that is essentially free (well except perhaps the IDE), so for 
most of them it won't matter.


Even when smaller languages try to implement such tooling/features there isn't a large enough user base to sustain the implementation (since even for big languages the actual user base for those features is small), so it is very hard for smaller languages to branch into those special niches unless the whole language feature set is geared towards a specific niche... but that harms adoption too...




Re: Quora: Why hasn't D started to replace C++?

2018-01-30 Thread Ola Fosheim Grøstad via Digitalmars-d

On Tuesday, 30 January 2018 at 21:43:45 UTC, Daniel Kozak wrote:
Thats not completly true, last time I tried some of best c++ 
version from http://benchmarksgame.alioth.debian.org/. I was 
able to write much idiomatic D code which was faster than c++ 
witch use some specific libraries and so on. So my experience 
is that D is same or faster than C++


I'm not thinking about those. I'm thinking about supported 
tooling that makes you more productive when writing code with 
high throughput.




Re: Quora: Why hasn't D started to replace C++?

2018-01-30 Thread Ola Fosheim Grøstad via Digitalmars-d

On Tuesday, 30 January 2018 at 21:30:06 UTC, H. S. Teoh wrote:
On Tue, Jan 30, 2018 at 09:26:30PM +, Ola Fosheim Grøstad 
via Digitalmars-d wrote:
Specific examples, please. What are some examples of these 
"performance related options"?


Supported concurrency options and tuning etc.





Re: Quora: Why hasn't D started to replace C++?

2018-01-30 Thread Ola Fosheim Grøstad via Digitalmars-d

On Tuesday, 30 January 2018 at 21:02:13 UTC, H. S. Teoh wrote:
On Tue, Jan 30, 2018 at 03:45:44PM -0500, Andrei Alexandrescu 
via Digitalmars-d wrote:

https://www.quora.com/Why-hasnt-D-started-to-replace-C++

[...]

I actually agree with all of his points, except one: C++'s 
"enormous performance advantage"?!  Is he being serious?  Or is 
his view biased by dmd's, erm, shall we say, "suboptimal" 
optimizer?


Well, I understand what you mean, but there are more performance 
related options for C++ than for any other language at the 
moment. I don't use them, most people don't, but they exist. So 
for people that care a lot about throughput there basically is no 
other "complete" alternative...




Re: How programmers transition between languages

2018-01-30 Thread Ola Fosheim Grøstad via Digitalmars-d

On Tuesday, 30 January 2018 at 19:19:39 UTC, Laeeth Isharc wrote:
Every language is based on different principles.  The way D 
will be adopted is via people who are principals giving it a 
try because it solves their problems.


Not sure what you mean by principles; Algol languages (the class of languages D belongs to) tend to be rather similar as far as principles for computation go.


At the early stage adoption is rarely driven by management. 
Management tends to go with major players. In order to go with 
smaller players you need to appeal to engineering-aesthetics, not 
management constraints.


Both Rust and Go have gotten adoption along the engineering-aesthetics dimension. There are very few management-related advantages with those languages at the stage where D is at. 
Docker changed that a bit for Go, but that was after it had a 
fair following. Anyway, it also confirms the "you need a major 
application" assumption.


Although Python didn't have a major application when it was adopted. It was just better than Perl, PHP and Bash.




Re: How programmers transition between languages

2018-01-30 Thread Ola Fosheim Grøstad via Digitalmars-d

On Monday, 29 January 2018 at 10:12:04 UTC, Russel Winder wrote:
On Mon, 2018-01-29 at 03:22 +, Ola Fosheim Grøstad via Digitalmars-d wrote:
[…]
I guess some go to Rust after working with Go, but the 
transition matrix linked above suggests that the trend has 
been that people give up on Rust and try out Go then Python... 
Of course, with so little data things are uncertain and can 
change.


I think this is an important point. There was some interesting 
data found, some hypotheses possible, but without further 
experimentation and/or formal use of Grounded Theory 
exploration, all commentary is speculation and opinion.


Grounded Theory cannot be used for trend analysis though. 
Certainly you could use quantitative data for sampling some users 
(using some kind of stratification) and interview them and then 
use GT to try to capture themes in the psychology of people who 
move from one language to another. So yes, more interesting, but 
you wouldn't learn more about the actual trend.


It is like the difference between knowing why waves build up in 
the ocean (qualitative) and knowing how many large waves we have 
and where they tend to occur (quantitative). Both are interesting.





Re: How programmers transition between languages

2018-01-30 Thread Ola Fosheim Grøstad via Digitalmars-d

On Tuesday, 30 January 2018 at 12:35:21 UTC, jmh530 wrote:
Sure, but I don't think there are enough D github-repositories 
to get decent quantitative analysis... Maybe a qualitative 
analysis.


Small sample size problem makes me think of Bayesian 
analysis...though I suppose there's a bigger problem if D 
doesn't have enough github repositories.


Yes, I think you need quite a lot of users to get enough github repositories. Not sure what the percentage would be; maybe 1 out of 50 users or fewer are active on github?


Of course, the Rust analysis might be a bit off too, because it 
is common for enthusiasts to try out and leave young languages. 
There might be a significant difference between hobby and 
professional users too.


Anyway, I think the "pagerank" transition analysis is interesting. It provides some insights into dynamics, and the transitions that don't happen also give us something to think about.






Re: How programmers transition between languages

2018-01-30 Thread Ola Fosheim Grøstad via Digitalmars-d

On Monday, 29 January 2018 at 22:28:35 UTC, Michael wrote:
I would hazard a guess that Go is likely the language they 
settle on for whatever task required something more low-level 
like Rust/Go(/D? =[ ) and that they move to Python for the 
kinds of scripting tasks that follow development of something 
in the previous languages.


That could be, many have Python as a secondary high level 
language.


Re: How programmers transition between languages

2018-01-29 Thread Ola Fosheim Grøstad via Digitalmars-d

On Monday, 29 January 2018 at 05:21:14 UTC, jmh530 wrote:
I don't deny that there are limitations to the data. At best, 
it would be telling you the transition of github users over a 
specific period.


Sure, but I don't think there are enough D github-repositories to 
get decent quantitative analysis... Maybe a qualitative analysis.







Re: How programmers transition between languages

2018-01-28 Thread Ola Fosheim Grøstad via Digitalmars-d

On Sunday, 28 January 2018 at 23:09:00 UTC, Michael wrote:
by the whole target audience. Rust, on the other hand, seems to 
be picking up those who have left Go.


I guess some go to Rust after working with Go, but the transition 
matrix linked above suggests that the trend has been that people 
give up on Rust and try out Go then Python... Of course, with so 
little data things are uncertain and can change.




Re: How programmers transition between languages

2018-01-28 Thread Ola Fosheim Grøstad via Digitalmars-d

On Sunday, 28 January 2018 at 13:50:03 UTC, Michael wrote:
I find it fascinating that C# is in the "languages to avoid" 
section, because from my perspective it's receiving more and 
more adoption as the modern alternative to Java, in a way that 
Go and Rust are not. Different markets and all of that. So I 
can't see why C# would be seen as a language that is dropping 
in popularity (though I don't use it myself).


I don't think the data suggests that? A small dip, perhaps, as it 
is less relevant outside Microsoft desktop. But the transition 
matrix suggests that people move a bit between Java and C# and 
that people move from Pascal and Visual Basic to C#.


I suspect some projects move from C# to TypeScript, though.

I do worry that, having been using D for about 3 1/2 years now, the perceptions of D outside of this community don't seem to be changing much.


But D isn't changing either...



Re: How programmers transition between languages

2018-01-28 Thread Ola Fosheim Grøstad via Digitalmars-d

On Sunday, 28 January 2018 at 00:31:18 UTC, rjframe wrote:

On Sat, 27 Jan 2018 22:59:17 +, Ola Fosheim Grostad wrote:


On Saturday, 27 January 2018 at 13:56:35 UTC, rjframe wrote:
If you use an IDE or analysis/lint tool, you'll get type 
checking. The interpreter will happily ignore those 
annotations.


You need to use a type checker to get type checking... No surprise there, but without standard type annotations the type checker isn't all that useful. Only in the past few years have typing stubs become available for libraries, and that makes a difference.


My point is that the interpreter ignores information that I 
give it, when that information clearly proves that I have a 
bug. Python 3.6 gives you an explicit type system when you 
want/need it, but stops just short of making it actually useful 
without secondary tools.


The reference interpreter doesn't make much use of static type information. I think it makes sense to have separate type checkers until this new aspect of Python has reached maturity. That doesn't prevent third parties from implementing interpreters that make use of type information.


Professionals won't shy away from using additional tools anyway, 
so the only reason to build it into the interpreter is for 
optimization at this point.




Re: How programmers transition between languages

2018-01-26 Thread Ola Fosheim Grøstad via Digitalmars-d

On Friday, 26 January 2018 at 17:24:54 UTC, Benny wrote:
What i found interesting is the comparison between the "newer" 
languages and D ( see the reddit thread ).


9   Go  4.1022
15  Kotlin  1.2798
18  Rust0.7317
35  Julia   0.0900
46  Vala0.0665
50  Crystal 0.0498
53  D   0.047%

While i can understand Rust ( Mozilla ), Kotlin ( Jetbrain ), 
Go ( Google ).

Even Vala and Crystal are ranked higher then D.


Yes, those stats are interesting too, but Go seems to do much better than Rust. And if the trend is that people move from Rust to Go and from Go to Python, it might mean that people start out trying new languages with performance goals in mind, but eventually go for productivity when they realize that they pay a fairly high price for those performance gains? Anyway, with Python 3.6 you get fairly good type annotation capabilities which allow static type checking that is closing in on what you get with statically typed languages. Maybe that is a factor too.





How programmers transition between languages

2018-01-26 Thread Ola Fosheim Grøstad via Digitalmars-d
While this analysis of language popularity on Github is 
enlightening:


http://www.benfrederickson.com/ranking-programming-languages-by-github-users/

I found the older analysis of how programmers transition (or 
adopt new languages) more interesting:


https://blog.sourced.tech/post/language_migrations/

Like how people move from Rust to Go. And from Go to Python:

https://blog.sourced.tech/post/language_migrations/sum_matrix_22lang_eig.svg


Also the growth of Java is larger than I would have anticipated:

https://blog.sourced.tech/post/language_migrations/eigenvect_stack_22lang.png

Granted, Java has gotten quite a few convenience features over 
the years.




Re: Tuple DIP

2018-01-17 Thread Ola Fosheim Grøstad via Digitalmars-d

On Wednesday, 17 January 2018 at 19:43:03 UTC, Manu wrote:
I quite like C++ explicit unpacking (using ...), and I wouldn't be upset to see that appear here too.


FWIW C++17 has structured binding:
http://en.cppreference.com/w/cpp/language/structured_binding

auto [x, y] = f();
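
In full, a minimal runnable sketch of that (my example):

#include <iostream>
#include <tuple>

std::tuple<int, double> f() { return {1, 2.5}; }

int main() {
    auto [x, y] = f();  // names bound to the tuple elements by position
    std::cout << x << " " << y << "\n";
}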





Re: [theory] What is a type?

2018-01-16 Thread Ola Fosheim Grøstad via Digitalmars-d

On Tuesday, 16 January 2018 at 06:26:34 UTC, thedeemon wrote:
"Practical Foundations for Programming Languages" by Robert 
Harper


Well, I have that one, and the title is rather misleading. Not at 
all practical.


electronic computers and it's still very relevant today. Anyone 
dabbling into compilers and programming language theory should 
learn the basics of type theory, proof theory and some category 
theory, these three are very much connected and talk about 
basically the same constructions from different angles (see 
Curry-Howard correspondence and "computational 
trinitarianism"). It's ridiculous how many programmers only 
learn about types from books on C++ or MSDN, get very vague 
ideas about them and never learn any actual PLT. Of course type 
is not a set of values, or any other set, not at all.


I don't really agree with this, empirically speaking. Many people have designed good languages without such a focus, and many have designed not-so-good languages with this focus...


Anyway, let's not make this too complicated, no need for a pile 
of tomes:


https://en.wikipedia.org/wiki/Type_theory




Re: [theory] What is a type?

2018-01-15 Thread Ola Fosheim Grøstad via Digitalmars-d

On Monday, 15 January 2018 at 19:25:16 UTC, H. S. Teoh wrote:
At the most abstract level, a type is just a set. The members 
of the set represent what values that type can have.


Hm, yes, like representing an ordered pair (a,b) as {{a},{a,b}}.

But I think typing is more commonly viewed as a filter. So if you 
represent all values as sets, then the type would be a filter 
preventing certain combinations.


It is a matter of perspective, constructive or not constructive. 
Kinda like synthesis, additive (combine sines) or subtractive 
(filter noise).




Re: How do you use D?

2018-01-05 Thread Ola Fosheim Grøstad via Digitalmars-d

On Friday, 5 January 2018 at 11:49:44 UTC, Joakim wrote:
Yes, but when I pointed out that it's fine to think you're the 
best as long as you stay focused on bettering the flaws you 
still have,


I don't think that thinking you're the best brings anything but 
disadvantages, actually… Except when you are advertising perhaps.



What would be better, a million JS programmers with 10k great 
ones who "grow your infrastructure," or 150k D programmers with 
30k great ones doing the same?  Holding everything else 
equivalent proportionally, I'd say the latter.


Well, that is not the scale we are talking about here, but actually the former is better if it means that you get twice as many people who are paid to grow the ecosystem full time. If you compare JavaScript with D on that metric you are closer to a 1000:1 ratio or more in JavaScript's favour… Not a fair comparison, but that's the reality that drives adoption.


When you reach critical mass within a specific domain a lot more people will be doing full-time ecosystem development… (Facebook, Google, Microsoft + a very large number of smaller companies).


The browser domain is very large, so not a fair comparison, of 
course.



speeding up a fundamentally slow and inefficient language 
design, which a core team of great programmers wouldn't have 
put out there in the first place. :P


Doesn't really matter in most cases. The proper metric is "sufficient for the task at hand". So, if many people are paid to do full-time ecosystem development, that also means that the tool is sufficient for a large number of use cases…


You can do the same in browsers as people do with Python. Use 
something else for those parts where speed is paramount: stream 
from a server, use WebGL or WebAssembly, or use the browser 
engine cleverly. For instance I've implemented instant text search using CSS even for IE9; the browser engines were tuned for it, so it was fast.


People use the same kind of thinking with C++/D/Rust as well, i.e 
use the GPU when the CPU is too slow, or use a cluster, or use a 
database lookup…


Both computer hardware and the very capable internet connections people have (at least in the west) are changing the equations.


It is easier to think about a single entity like a programming 
language with a small set of isolated great programmers writing 
an application that will run on an isolated CPU, but the world is 
much more complicated now than it used to be in terms of options 
and the environment for computing.



of programming.  As for "full stack," a meaningless term which 
I bet actually has less CS bachelors than the percentage I 
gave. ;)


I understand "full stack" to mean that you can quickly adapt to 
doing  database, server, client and GUI development.


Saying that most C++ programmers also use python implies that 
having two tools that you choose from is enough.  In that case, 
you're basically agreeing with me, despite your previous 
statements otherwise, as I was saying we can't expect most 
programmers to learn more than one tool, ie a single language.


No.  The programming style for Python is very different from C++. 
 Just because many C++ programmers also know Python, doesn't mean 
that they don't benefit from also knowing other languages. I bet 
many also know JavaScript and basic Lisp.


Could be, but that changes nothing about the reality that most 
programmers are just going to pick one language and some set of 
frameworks that are commonly used with it.


Ok. I personally don't find the standard libraries for Java and 
C# difficult to pick up as I go.   Google gives very good hits on 
those.


Application frameworks are more tedious, but they are also more 
quickly outdated. So you might have to learn or relearn one for a 
new project anyway.


That is the current reality, it doesn't matter what 
hypothetical kind of development you have in mind.


That's not an argument… That is just an unfounded claim.

D users are the exception that prove the rule, the much larger 
majority not using D because they're already entrenched in 
their language.


I don't really think D users are as exceptional as they often 
claim in these forums…


when you _couldn't_ to entrench themselves in that market.  And 
as I've pointed out to you before, they're still much more 
efficient, so until battery tech get much better, you're going 
to want to stick with those natively-compiled languages.


Not convinced. I think most people on smartphones spend a lot of 
time accessing browser-driven displays. Either in the actual 
browser or as a widget in an app…


It doesn't really matter, because the dominating activity isn't 
doing heavy things like theorem proving or data-mining…


As long as the code for display rendering is efficient, but that 
is mostly GPU.


Which of those python-glued GUI apps has become popular?  That 
was the question: I can find toy apps in any language, that 
nobody uses.


That's not right. Scripting is co

Re: What don't you switch to GitHub issues

2018-01-05 Thread Ola Fosheim Grøstad via Digitalmars-d

On Thursday, 4 January 2018 at 12:11:36 UTC, rjframe wrote:

Python has more than 6000, 2000+ with patches[2].


Most appear to be library-related or improvement requests…

I don't think numbers are the right metric.

Tools with as many users as Python will get a lot of issues 
reported irrespective of severity or frequency.




Re: Old Quora post: D vs Go vs Rust by Andrei Alexandrescu

2018-01-04 Thread Ola Fosheim Grøstad via Digitalmars-d

On Thursday, 4 January 2018 at 20:25:17 UTC, jmh530 wrote:
That probably makes Pony easier to compare to D. I was just 
noting that Rust shares some ownership stuff with Pony.


I get your point. Pony is probably closer to Go and Erlang, but 
nevertheless comparable to what some want from D based on what 
they say in the forums at least.


I suppose I'm curious what is the bare minimum that needs to 
get added to D to enjoy the benefits of an ownership system 
(and it seemed like something like the iso type was most 
important).


There should at least be a strategy for having transitions between pointer types and tracking of pointer types in relation to compile-time code gen and run-time behaviour. So knowing that a pointer is iso can allow many things, like transitioning into immutable, automatic deallocation, reusing memory without reallocation, moving heap allocations to the stack…
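
A loose C++ analogy for that kind of transition (my sketch; not how Pony spells it, and nothing D has today): a uniquely owned value can be safely "frozen" into immutable shared data, because the type system knows no other mutable reference can exist:

#include <memory>
#include <string>

// unique_ptr plays the role of an iso pointer: sole access, mutation ok.
// Handing it over to a shared_ptr<const T> gives up the unique handle,
// so the now-shared data can safely be treated as immutable.
std::shared_ptr<const std::string> freeze(std::unique_ptr<std::string> p) {
    return std::shared_ptr<const std::string>(std::move(p));
}

int main() {
    auto s = std::make_unique<std::string>("built in isolation");
    *s += ", then frozen";               // mutate while uniquely owned
    auto frozen = freeze(std::move(s));  // from here on: read-only sharing
}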


But if D is going to have a GC and no special code-gen to back it 
up then it becomes important to stratify/segment the memory into 
regions where you know that pointers that cross boundaries are 
limited to something known, so that you can scan less during 
collection. Which is rather advanced, in the general case, IMO. 
But a possibility if you impose some restrictions/idioms on the 
programmer.




Re: How do you use D?

2018-01-04 Thread Ola Fosheim Grøstad via Digitalmars-d

On Thursday, 4 January 2018 at 13:07:37 UTC, Mengu wrote:
there's also one other thing: atom, vs code, spotify, slack are 
all running on electron. does it make it a better platform than 
python?


I found this example of using electron with Python:
https://github.com/keybraker/electron-GUI-for-python

Could probably use the same model for any language.




Re: Maybe D is right about GC after all !

2018-01-04 Thread Ola Fosheim Grøstad via Digitalmars-d

On Thursday, 4 January 2018 at 10:49:49 UTC, Dgame wrote:
On Thursday, 4 January 2018 at 10:40:33 UTC, Ola Fosheim 
Grøstad wrote:
However, Rust won't fare well in a head-to-head comparison 
either, because of the issues with back-pointers.


Could you explain this?


You often want back-pointers for performance reasons, but Rust's default pointer model isn't as accepting of cycles. So you either have to change your design or lose the benefits that the compiler provides, at which point C++ is more convenient, but it depends on what you try to do.
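
For instance (my sketch), a tree where every child keeps a back-pointer to its parent is trivial in C++, because the owning edges only point downwards and the upward pointers are plain non-owning pointers; in safe Rust the same design pushes you towards Rc/RefCell/Weak or indices:

#include <memory>
#include <vector>

struct Node {
    Node* parent = nullptr;                      // back-pointer, non-owning
    std::vector<std::unique_ptr<Node>> children; // owning edges, downwards

    Node* addChild() {
        children.push_back(std::make_unique<Node>());
        children.back()->parent = this;
        return children.back().get();
    }
};

int main() {
    Node root;
    Node* leaf = root.addChild()->addChild();
    int depth = 0;
    for (Node* n = leaf; n->parent; n = n->parent)
        ++depth;  // walks back up via the back-pointers; depth == 2
}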


Re: How do you use D?

2018-01-04 Thread Ola Fosheim Grøstad via Digitalmars-d

On Wednesday, 3 January 2018 at 20:11:03 UTC, Pjotr Prins wrote:
I am thinking of Python here. I have to work in Python and 
there is no love lost.


If I write conservative Python code then I think PyCharm 
community edition is getting me closer to static typing, but 
providing type info as comments when deduction fails is quirky.


small group of people. I am partial to organizing a Guile/Guix 
day. Feel free to join in! 
https://libreplanet.org/wiki/Group:Guix/FOSDEM2018.


Sounds like a fun event!  (but I am nowhere near Belgium at that 
point in time).




Re: Maybe D is right about GC after all !

2018-01-04 Thread Ola Fosheim Grøstad via Digitalmars-d

On Thursday, 4 January 2018 at 10:18:29 UTC, Dan Partelly wrote:
Rust has a OS being written right now. Does D has ? Anyone ever 
wanted to use D to write a OS kernel, I doubt it.


Yes, a group started on it,  but I don't think it reached 
completion. Anyway, you see traces of interest around the 
internet, e.g. http://wiki.osdev.org/D_Bare_Bones


But the argument is fair, as long as Walter and Andrei are trying 
to position D as an alternative to C and C++. Then D should be 
evaluated on those terms.


However, Rust won't fare well in a head-to-head comparison 
either, because of the issues with back-pointers.




Re: Maybe D is right about GC after all !

2018-01-03 Thread Ola Fosheim Grøstad via Digitalmars-d

On Wednesday, 3 January 2018 at 22:37:54 UTC, Tony wrote:
Why would they choose D for low level programming when they 
knew before they chose it that it had a Garbage Collector?


Because it was/is a work-in-progress when they first got 
interested in it, and it was also advertised as a replacement for 
C and C++.  It was also assumed that one could expect significant 
improvements on the GC as the language matured. But of course, 
computers also get more and more memory to scan…



who do. But maybe that is the case, the people who complain 
about the Garbage Collector in this D forum are not using D.


Both. Some use D, but not the GC after initialization. Some wanted to use D (in production), but have put that on hold and are just waiting to see where it is heading. Some gave up on D for low level programming and turned to other languages.


So yes and no. ;-)



Re: Old Quora post: D vs Go vs Rust by Andrei Alexandrescu

2018-01-03 Thread Ola Fosheim Grøstad via Digitalmars-d

On Wednesday, 3 January 2018 at 22:06:22 UTC, Mark wrote:
I don't know much about GCs, but can you explain why that would 
be necessary?


Necessary is perhaps a strong word, but since D will never put restrictions on pointers, you need something other than what Java/C#/JavaScript/Go are using.


There are many ways to do memory management, but if you want "smart" memory management where the programmer is relieved from the burden of manually making sure that things work, then the compiler should also be able to reason about the intent of the programmer, and you need some way to express that intent.


E.g. Pony differentiates between different types of "ownership" and can transition between them, so you can go from memory that is isolated to a single pointer, to pointers that only know the address but cannot access the content, to pointers that are fully shared, etc.


You also have something called effect systems, which are basically a sort of type system that can statically track that a file is opened before it is closed, etc.


So, if you have a smart compiler that is supposed to do the hard 
work for you, you probably also want some way to ensure that it 
actually is doing the smart things and that you haven't 
accidentally written some code that will prevent the compiler 
from generating smart code.


For instance, if you have a local garbage collector for a graph, you might want to statically ensure that there are no external pointers into the graph when you call the collection process, although you might want to allow such pointers between collections. That way you don't have to scan more than the graph itself; otherwise you would have to scan all memory that could contain pointers into the graph...
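
A minimal sketch of that idea (my illustration, nothing D-specific): all nodes live in one pool, marking starts only from roots handed to the pool, and the precondition is exactly that no pointers into the pool exist elsewhere while collect() runs:

#include <cstddef>
#include <memory>
#include <vector>

// All graph nodes are owned by the pool, and edges are pool indices,
// so collection never has to scan anything but the pool itself.
struct Node {
    std::vector<std::size_t> edges;  // indices into the pool
    bool marked = false;
};

class GraphPool {
    std::vector<std::unique_ptr<Node>> nodes;
public:
    std::size_t makeNode() {
        nodes.push_back(std::make_unique<Node>());
        return nodes.size() - 1;
    }
    Node& node(std::size_t i) { return *nodes[i]; }

    // Precondition: no external pointers into the pool during collection.
    void collect(const std::vector<std::size_t>& roots) {
        for (auto& n : nodes)
            if (n) n->marked = false;
        std::vector<std::size_t> stack(roots);   // mark phase, pool-local
        while (!stack.empty()) {
            std::size_t i = stack.back(); stack.pop_back();
            if (nodes[i]->marked) continue;
            nodes[i]->marked = true;
            for (auto e : nodes[i]->edges) stack.push_back(e);
        }
        for (auto& n : nodes)                    // sweep phase
            if (n && !n->marked) n.reset();      // unreachable: free it
    }
};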





Re: Old Quora post: D vs Go vs Rust by Andrei Alexandrescu

2018-01-03 Thread Ola Fosheim Grøstad via Digitalmars-d

On Wednesday, 3 January 2018 at 19:42:32 UTC, Ali wrote:
That was not a comment about GC implementations in general, it 
was about D's GC implementation, check the original post on 
Quora, and read the statement right before this one, maybe it 
will make this point clearer


I read it when it was posted… D does get the key benefits of a GC 
(catching reference cycles and convenient memory management), but 
it also gets the disadvantages of a 1960s GC implementation.


There are many ways to fix this, which has been discussed to 
death before (like thread local garbage collection), but there is 
no real decision making going on to deal with it.  Because to 
deal with it you should also consider semantic/types system 
changes.


The chosen direction seems to be to slowly make everything 
reference counted instead, which doesn't really improve things a 
whole lot IMO. At best you'll end up where Swift is at. Not bad, 
but also not great.


And it raises many concerns I see in many posts by people who are a lot less involved with D, like the issues with the GC and the elusive vision


Well, but leadership is about addressing the concerns and making 
some (unpopular) decisions to get here.  *shrug*


But I don't really care anymore, to be honest… Several other 
compiled languages are exploring new directions, so I'm currently 
content to watch where they are going…


I personally think D would be in a stronger position if some hard 
decisions were made, but that isn't going to happen anytime soon. 
So there we are.





Re: Maybe D is right about GC after all !

2018-01-03 Thread Ola Fosheim Grøstad via Digitalmars-d

On Wednesday, 3 January 2018 at 19:42:28 UTC, Tony wrote:
Why would someone choose to use a language with a Garbage 
Collector and then complain that the language has a Garbage 
Collector?


People always complain about garbage collectors that freeze up 
the process. Irrespective of the language. It's the antithesis of 
low level programming…





Re: How do you use D?

2018-01-03 Thread Ola Fosheim Grøstad via Digitalmars-d

On Wednesday, 3 January 2018 at 12:15:13 UTC, Pjotr Prins wrote:
they come if they need it. I remember a Google engineer telling 
me that he was tired of people bringing up D every time. That 
was 10 years ago. D has had every chance to become a hype ;)


There was a lot of hype around D about 10 years ago, because of slashdot? (slashdot doesn't seem to work as a hub in that way anymore?) Geeky people had even heard about it in job interviews… But the compiler was a bit too buggy for production use and the GC was… a problem for what it was positioning itself as: a replacement for C++.


There are other languages that also position themselves as C++ 
replacements, such as http://loci-lang.org/  , but very few have 
heard of those. So yeah D has enjoyed more hype than most 
languages in this domain, actually.



why. What is it that makes a hyped language?


Well, depends on the hype, I guess.  But you probably need some 
kind of "prophetic" message that will "remove all pain". Since 
C++98 was not painless, there was a market for a "prophetic 
message". Much less so now. Also you need a "tower for 
announcing" like Slashdot. I don't think reddit is anywhere near 
as effective as Slashdot used to be. Too fragmented.


Rust received hype because it would make writing fast programs "painless": just wait, it isn't quite ready yet, but we'll get there. So the hype prophecies were there before Rust was actually useful.


I don't think Go was all that hyped up. It received a lot of 
attention at first release, but was underwhelming in terms of 
features. But it received a lot of attention when being used for 
containers, I believe. So more a niche utility marketing effect 
in terms of buzz.


So hype seems to come with a language being used for some new way 
of doing something (even though the language might not be 
significant in that regard, e.g. Go). Or the hype seems to come 
before the product is actually useful, not quite like a pyramid 
scheme, but close… Oh yeah, bitcoin too. Prophetic, but not 
particularly useful… yet, but just wait and see.


So I guess hype comes from:

1. People having an emotional desire to be free from something.

2. A tech delivering some kind of prophetic message one can imagine will provide some kind of catharsis if one just believes in the outcome.


Then you have the psychological effect that if people have invested significant time, resources and/or emotion into something, then they will defend it and refuse to see flaws even when faced with massive evidence against it. So the hype will be sustained by a vocal set of believers if you have reached a large enough audience with your messaging before the product is launched…?


Then it tapers off in a long tail… or the believers will make it 
work somehow, at least good enough to not be a lot worse than the 
alternatives.


docs. I just disagree with the aim of trying to make D a hyped 
language.


Yes, that is a bit late, I think. You would have to launch D3 or 
something to get a hype effect (at least in the west).


A language like GNU Guile has only a few developers - and they 
do great work.


But is Guile used much outside GNU affiliated projects?




Re: How do you use D?

2018-01-03 Thread Ola Fosheim Grøstad via Digitalmars-d

On Wednesday, 3 January 2018 at 16:20:48 UTC, Joakim wrote:
These languages may all have these problems, but I don't see 
the connection to your original point about it not being good 
to think you're the best.


Hm?  I tried to say that it is not good to think that you have 
the best programmers. Or that you are satisfied with only 
appealing to the best programmers.


There are some assumptions about "best" in that attitude that aren't healthy for developing an ecosystem. First of all, it might not be true; maybe the best programmers go elsewhere. Second of all, programmers generally don't start out as "best". And only appealing to experienced programmers is not a good long-term strategy.


Anyway, you probably have a lot more great programmers using Javascript than any small language, measured in absolute numbers. And when growing an ecosystem it is those absolute numbers that matter. It doesn't matter if 99% are not good if the other 1% can grow your infrastructure. And well, right now that means developers can be very productive in that ecosystem (e.g. TypeScript, Angular, React etc).



More programmers don't have a bachelors in CS than those who 
do.  I think you'll find the percentage who regularly use 
multiple languages is fairly low, let alone evaluating each one 
for each job.


I think it depends on what type of development we are talking about. But I see a lot of buzz about full-stack developers. Not sure if that buzz translates into something at scale, but "full-stack" is at least something that is mentioned frequently.



Anyway, it is my impression that many C/C++ programmers also 
know Python and also have a good understanding of basic 
functional programming.


So two tools in the toolbox is enough?


Not sure what you mean by that. I meant to say that my impression 
is that most C/C++ programmers need a higher level language and 
also are capable of using other paradigms than what C/C++ 
encourages. Despite C++ being able to express a lot of the same 
things and also despite C++ being a language where you never 
really reach a full deep understanding of the language.


Your company is much more likely to bring in somebody with Java 
experience to write the Android portion.


Mostly because of the iOS/Android frameworks, not really because 
of the languages.


Or rather, the Java language shouldn't be the stumbling block, but both iOS and Android are rather bloated frameworks that take time getting into. Which I think is intentional. Both Apple and Google want developers to be dedicated experts on their platform and seem to deliberately go for changes that prevent cross-platform tooling.


It is unrealistic to expect the vast majority of programmers to 
do any more than use one language most of the time.


I don't know. Depends on what kind of development we are talking 
about.


This actually argues for a single language: you're going to 
spend a ton of time tracking all those shifting frameworks so 
you won't have time for a new language too, but at least you 
won't have to learn a new language every time you pick up the 
new js framework of the day.


I think there is another way, but it isn't available yet.

Program transliteration and software synthesis should over time 
be able to offset some of these issues.


Most imperative languages are fairly similar. Most of the code we write, whether it is in Python, C++ or D, would have a fairly similar structure. In some sections you get something very different, but I don't think that is dominating.


(Functional and logic programming tend to lead to more different 
patterns though.)


The ongoing churn may help new languages with new users and 
companies, don't think it helps with users who already know a 
language well.


I don't know. Seems that many in these forums have done work in 
Java, Go, Python, Rust and C++.


Seems like an oxymoron, tech changes so fast that I don't see 
how anything could be timeless, even if you really mean "will 
still be the same 20 years from now." ;)


Right, but digital computing is a relatively new invention (and 
well, even electricity is relatively new ;-).  Over time one 
would think that something "timeless" would establish itself. But 
for now we can just say "stable".


There clearly are some frameworks that have been relatively stable. Like Matlab. I think that is because Matlab has tried to closely model the domain that scientists have been trained in (linear algebra). But it is interesting that Python is making inroads there. That Matlab is now perceived as not having good enough abstraction mechanisms, perhaps.


On the other hand, the mathematical field of logic is also 
relatively new as it is conceptualized today, and some people are 
exploring paradigms such as probabilistic programming languages 
and what not… So right, "timeless" is not realistic at the moment 
as the basic foundation is still quite new and we are still 
learning how to make comput

Re: How do you use D?

2018-01-03 Thread Ola Fosheim Grøstad via Digitalmars-d

On Wednesday, 3 January 2018 at 11:13:04 UTC, Joakim wrote:
Not necessarily, it all depends if thinking you're the best 
leads to taking your eye off the ball of improving and 
acknowledging your problems, which I see little indication of 
here.


Well, if a language is used for a narrow set of application areas 
then one might be able to have a deep understanding of what the 
challenges are.


In general I think it is difficult to understand what challenges people have in areas one has little familiarity with. I've seen this time and time again in discussions about memory management.


I think both Rust and D suffers a bit from this.  And likewise, I 
think C++ designers are making a mistake by presuming that C++ 
can become a competitive high level language.


So it is a general challenge in language design, I think.

Agreed with much of this: they may be better at various things, 
but those things don't matter much of the time.


Yes, I don't think any programmers are "best", some people are 
not at all suited for programming or lack some general knowledge, 
but otherwise I think we only have good programmers that are 
"best" in some very narrow domains.


And who do you know who does this?  While I myself have 
espoused the idea of the best tool for the job in this forum, 
realistically, other than a small handful of coders, when does 
that ever happen?


I don't know how many…  But if you have a good understanding of a 
language like Java and C/machine language, and also have the 
equivalent of a bachelor in comp sci, then I think you should be 
able to pick up just about any language in relatively short time 
(except C++ where the know-how is less accessible for various 
reasons).


Anyway, it is my impression that many C/C++ programmers also know 
Python and also have a good understanding of basic functional 
programming.


I recently read some article linked off proggit that stated the 
reality much better: programmers learn a language that suits 
them, then use it everywhere they can.


Well. Take an iOS programmer. She/he would start out with Objective-C, which requires understanding of C. Then maybe you also need to interface with C++ in order to use some library. Then you switch over to Swift… Then you need to port the app to Android and have to pick up some Java. Then somebody wants to turn it into a webapp and you pick up TypeScript… Then someone needs interfacing with C# for a web-service and you pick up a bit of that…


As long as you are interfacing with other systems you often 
achieve more by learning enough to use something written in 
another language than by picking a suboptimal solution that use 
your favourite language.


But sure, if I need to hack together something I would usually think about using Python first, mostly because it is so malleable for smaller tasks. Not really because it is my preferred language. I don't really like dynamic typing in principle, but it is productive for smaller tasks and prototyping.


Given how much effort it takes to know a language platform 
well, not only the language itself but all the libraries and 
their bottlenecks, and most programmers' limited motivation to 
pick up new tech, that is the only approach that makes sense 
for the vast majority.


Well, but libraries and frameworks are unfortunately not 
permanent. They are shifting relatively fast. Heck, even the Java 
standard library contains many things that you don't want to use, 
because there is something better in there.


So if you start on a new project there is often a more productive 
framework you could go with.


And that might require adopting a new language as well.

This is of course good for new languages. If this was not the 
case then it would be near impossible for new languages to 
establish themselves.


But it can also be taken as a sign that we don't have the 
languages and tooling that enable writing really good timeless 
libraries and frameworks.


Maybe that will change in the future; I guess we are now reaching a point where neither CPU nor memory is the limiting factor for most mundane applications. So maybe that will cause a more stable infrastructure to emerge… but probably not in our lifetime. I think we still have a long way to go in even simple areas such as GUI frameworks.



Yes, but if you're only hyped because you were stowed away on a 
popular platform, ie javascript in the browser, or are easy to 
learn, Go from what I hear, then the technical limitations of 
the language put a ceiling on how high you can go.  You'll get 
to that ceiling faster, but then you're stuck there.


I perceived that there was a lot of hype around Python 15 years ago or so. Now, universities are replacing Java with Python as the introduction language and Python is also becoming the de facto language for scientific programming. Python is basically getting critical mass and is now managing to take on Matlab and perhaps to some extent even C++/Fortran.


The

Re: How do you use D?

2018-01-03 Thread Ola Fosheim Grøstad via Digitalmars-d

On Wednesday, 3 January 2018 at 09:56:48 UTC, Pjotr Prins wrote:
average ones. And D must be there. Similar to the Haskell and 
Lisp communities we have the luxury of dealing with the best 
programmers out there.


This attitude is toxic, and it isn't true either.  Sure, Haskell 
might attract programmers who are more interested in math, but in 
practical programming formal math is only 1% of what you need 
(99% of your time is not spent on things that require deep 
understanding of math).   I don't see any evidence of Lisp 
programmers being better than other programmers either.


Good programmers aren't stuck on any single language and will 
pick the tool best suited for the job at hand.  Good programmers 
are also good at picking up new languages.




Hyped languages are for suckers.


Hype leads to critical mass, which leads to higher productivity 
because you get better tooling, better documentation (including 
stack overflow), better libraries and better portability.


But you don't need lots of users to get hype.  You can focus on a 
narrow domain and be the "hyped language" within a single realm.



The only time where it is an advantage to be small is when your 
language design is changing. Once the language design is stable 
there is only disadvantages in not having critical mass.




Re: Documentation of object.destroy

2018-01-02 Thread Ola Fosheim Grøstad via Digitalmars-d

On Tuesday, 2 January 2018 at 20:07:11 UTC, jmh530 wrote:
What's the monitor do? I see that in the ABI documentation, but 
it doesn't really explain it...


A monitor queues/schedules processes that are calling methods of 
an object so that only one process is executing methods on a 
single object at any given point in time.


https://en.wikipedia.org/wiki/Monitor_(synchronization)
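
A minimal sketch of the concept in C++ (just the idea; this is not how D's runtime implements its per-object monitor):

#include <mutex>

// A monitor: every public method takes the object's lock, so at most
// one thread executes methods on a given instance at any point in time.
class Counter {
    mutable std::mutex monitor;
    long value = 0;
public:
    void increment() {
        std::lock_guard<std::mutex> hold(monitor);
        ++value;
    }
    long get() const {
        std::lock_guard<std::mutex> hold(monitor);
        return value;
    }
};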



Re: Old Quora post: D vs Go vs Rust by Andrei Alexandrescu

2018-01-02 Thread Ola Fosheim Grøstad via Digitalmars-d

On Tuesday, 2 January 2018 at 16:34:25 UTC, Ali wrote:
"Overall it could be said that D has the downsides of GC but 
doesn't enjoy its benefits."


Not a true statement. Go does not have long pauses, but it has slightly slower code and a penalty for interfacing with C. Go also has finalization with dependencies, so that finalizers execute in the right order…


The downsides/upsides of GC aren't something fixed for all designs.

This post on Quora makes it fully clear to me that the makers of D are fully aware of D's position


Not sure how you reached that conclusion.




Re: Maybe D is right about GC after all !

2018-01-02 Thread Ola Fosheim Grøstad via Digitalmars-d

On Tuesday, 2 January 2018 at 04:43:42 UTC, codephantom wrote:
Well, consider the silent 'minority' too, who still think that 
increasing performance, and reducing demands on resources, 
still matter, a lot, and that we shouldn't just surrender this 
just to make programmers more 'productive' (i.e so they can 
ship slower GC code, more quickly).


I think most of the people in this minority (which actually I think was a majority a few years back) have given up on D as a production language. I am certainly in that group. It is starting to be a bit late to change direction now, IMO. I mean, it is still possible, but that would require a mentality shift, which has been surprisingly difficult to establish.


Given the increased availability of memory in computers, I think an application language with a built-in, compiler-supported arena allocator will be a big win, but the only mainstream language that is going for this seems to be Golang. (Go is an application language, not a systems language.)
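
By "arena allocator" I mean roughly the following (a hand-rolled sketch; the point of compiler support would be to do this automatically and safely):

#include <cstddef>
#include <cstdlib>
#include <new>
#include <vector>

// A bump-pointer arena: allocation is a pointer increment, and
// everything is released at once when the arena dies -- no per-object
// free, and nothing for a collector to scan.
class Arena {
    std::vector<char*> blocks;
    char* cur = nullptr;
    std::size_t left = 0;
    static constexpr std::size_t BLOCK = 64 * 1024;
public:
    void* allocate(std::size_t n) {
        n = (n + 15) & ~std::size_t(15);  // keep 16-byte alignment
        if (n > left) {
            std::size_t size = n > BLOCK ? n : BLOCK;
            cur = static_cast<char*>(std::malloc(size));
            if (!cur) throw std::bad_alloc();
            blocks.push_back(cur);
            left = size;
        }
        void* p = cur;
        cur += n;
        left -= n;
        return p;
    }
    ~Arena() { for (char* b : blocks) std::free(b); }
};

struct Point { double x, y; };

int main() {
    Arena arena;  // e.g. one arena per request or per frame
    Point* p = new (arena.allocate(sizeof(Point))) Point{1.0, 2.0};
    (void)p;      // everything is freed when the arena goes out of scope
}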



What it really comes down to though, is language designers 
ensuring that any language that defines itself as a 'modern 
systems programming language', gives control 'to the 
programmer', and not the other way around.


Right now, I think only C++ and Rust fit the "modern system programming" description… GC and refcounting are for application-level programming, so they shouldn't even be on the table as a core solution for that domain.


But D seems to be content with application-level programming and 
that's ok too, but it is a bit misleading if you also say that 
you aim to be the best language for low-level programming… I 
don't really think it is possible to be good at both without a 
massive budget.


You have to pick what you want to be good at. And that is the 
main problem with the evolution of D: a lack of commitment to a 
specific niche.




Re: Developing blockchain software with D, not C++

2018-01-01 Thread Ola Fosheim Grøstad via Digitalmars-d

On Monday, 1 January 2018 at 13:34:39 UTC, aberba wrote:
On Monday, 1 January 2018 at 12:24:29 UTC, Ola Fosheim Grøstad 
wrote:
On Monday, 1 January 2018 at 11:24:45 UTC, Ola Fosheim Grøstad 
wrote:

[...]


Btw, I think one should be very sceptical of such 
presentations in general:


[...]


Come on! Ada?


Ada was designed for DoD projects…

https://www.adacore.com/sparkpro




Re: Developing blockchain software with D, not C++

2018-01-01 Thread Ola Fosheim Grøstad via Digitalmars-d
On Monday, 1 January 2018 at 11:24:45 UTC, Ola Fosheim Grøstad 
wrote:
I am not arguing with their choice, but it was for a C++ 
conference, so obviously they would praise C++…


Btw, I think one should be very sceptical of such presentations 
in general:


1.  They are there to market their project, so they have a strong 
incentive to appeal to the audience.  Offending the audience 
would be a bad strategy.


2.  Developers often have technological preferences in advance 
and will create arguments to defend their choice on rather 
subjective terms using "objective" claims.


3. The design and scope of a project tend to grow in the 
direction that the chosen platform is most suited for.


FWIW, I think the best technical choice for security-related 
infrastructure would be to write a library in C or Ada or some 
other toolset where proven verification capabilities are 
available, then tie it together using whatever is most convenient 
on the platform you target.




Re: Developing blockchain software with D, not C++

2018-01-01 Thread Ola Fosheim Grøstad via Digitalmars-d

On Monday, 1 January 2018 at 10:46:23 UTC, aberba wrote:
On Sunday, 31 December 2017 at 09:45:52 UTC, Ola Fosheim 
Grøstad wrote:

On Saturday, 30 December 2017 at 16:59:41 UTC, aberba wrote:
Besides, D maturity (which I can't confirm or deny), what 
else does D miss to be considered a better alternative for 
blockchain in 2018?


You can write blockchain software in any language you want. 
The reference implementation for OpenChain seems to be written 
in C#.
That's not the point. JavaScript can also be used 
(technically), but you don't want to use it (for their use 
case). Seems you haven't watched the video so you are speaking 
out of context.


Well, the video seems to be mostly about building a large 
key-value store without caching… So whatever language has been 
used for that before would be suitable, basically any language 
that can interface well with C/Posix…


But you want static typing or verification when security is an 
issue, so JavaScript is obviously out.


It is better when you consider things in context. The 
talk was focused on a particular context.


Then you need to be a bit more clear about what you mean. You can 
make a lot of assumptions to shoehorn a project into something 
specific, e.g. that the database will not fit in main memory, but 
that might not hold if you control the nodes and can afford large 
amounts of memory, etc. "Blockchain software" is a very vague 
requirement… What kind of load do you expect, and what are the 
real-time requirements (nanoseconds vs minutes)?


Remember that even JavaScript can run at close to 25-50% of the 
performance of portable C… So it really depends on what you do, 
what architecture you design and what the long-term development 
process looks like. Mixing languages is often the better choice 
for many projects.


I am not arguing with their choice, but it was for a C++ 
conference, so obviously they would praise C++…





Re: D as a betterC a game changer ?

2017-12-31 Thread Ola Fosheim Grøstad via Digitalmars-d

On Sunday, 31 December 2017 at 14:51:24 UTC, Russel Winder wrote:
The results are based on experimental data. Read the papers 
rather than my waffle about them.


I'd love to, but I haven't found the specific paper. She seems to 
work on many different things related to software design and 
visual tooling.






Re: Developing blockchain software with D, not C++

2017-12-31 Thread Ola Fosheim Grøstad via Digitalmars-d

On Saturday, 30 December 2017 at 16:59:41 UTC, aberba wrote:
Besides, D maturity (which I can't confirm or deny), what else 
does D miss to be considered a better alternative for 
blockchain in 2018?


You can write blockchain software in any language you want. The 
reference implementation for OpenChain seems to be written in C#.


I don't think "better" in this context is a technical issue, more 
of a cultural one. Well, portability could be an issue.





Re: D as a betterC a game changer ?

2017-12-30 Thread Ola Fosheim Grøstad via Digitalmars-d
On Thursday, 28 December 2017 at 11:56:24 UTC, Russel Winder 
wrote:
And is the way every programmer learns their non-first 
language. All newly learned programming languages are merged 
into a person's "head language" which is based on their first 
language but then evolves as new languages, especially of new 
computational mode, are learned.


See Marian Petre's and others' work over the last 30 years for 
scientific evidence of this.


Hm…  I have some problems with this.  I can see how it would apply 
to Algol-like languages, but I don't see how it fits very 
different concepts like SQL/Datalog/Prolog, Scheme, machine 
language, OO, etc…


There might be some empirical issues here as _most_ programmers 
would move to something similar, but statistical significance 
doesn't imply causality…




Re: Maybe D is right about GC after all !

2017-12-30 Thread Ola Fosheim Grøstad via Digitalmars-d
On Sunday, 24 December 2017 at 16:51:45 UTC, Patrick Schluter 
wrote:
That's the biggest problem with C++, they pile on relentlessly 
half baked feature after half baked feature in a big dump that 
no one with a life can ever grasp.


I think D has more first-class language features (and thus 
special casing) than C++.  For instance, C++ lambdas are just 
sugar over objects, and most of the new stuff is primarily library 
features with some minor language tweaks to back them. So I don't 
think the core C++ language itself has changed dramatically.
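
To illustrate the "sugar over objects" point: a C++ lambda is 
specified as an unnamed class with a call operator, and the same 
desugaring can be sketched in D with a struct and opCall 
(hypothetical names, purely to show the equivalence):

import std.stdio : writeln;

// Roughly what a capturing lambda desugars to: captured state
// becomes a field, the body becomes an opCall member.
struct AddN
{
    int n;
    int opCall(int x) { return x + n; }
}

void main()
{
    int n = 10;
    auto lambda = (int x) => x + n; // capturing closure
    auto addTen = AddN(10);         // the hand-written equivalent

    writeln(lambda(5)); // 15
    writeln(addTen(5)); // 15
}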


Like, constexpr is mostly about allowing things that were 
forbidden, but it is opening new ways to structure code, which in 
turn "deprecates" some old clumsy idioms… Except those old clumsy 
idioms linger… both in online tutorials, in code bases and of 
course in the mentality of the programmers…
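
D went through the same kind of shift with CTFE, and both the 
old clumsy idiom and its replacement still compile, so both 
linger. A small illustration (illustrative code, not from any 
particular code base):

// Old clumsy idiom: compile-time computation via recursive
// templates.
template Factorial(int n)
{
    static if (n <= 1)
        enum Factorial = 1;
    else
        enum Factorial = n * Factorial!(n - 1);
}

// Newer way: an ordinary function, evaluated at compile time
// on demand.
int factorial(int n)
{
    return n <= 1 ? 1 : n * factorial(n - 1);
}

static assert(Factorial!5 == 120);  // template recursion
static assert(factorial(5) == 120); // CTFE of a normal function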


Since those "deprecated" idioms are built by combining features, 
they cannot easily be detected and transformed into "modern" 
idioms by a tool either. Whereas a high-level dedicated language 
feature could more easily be "deprecated" and dealt with in a 
language upgrade.


Which is one downside of using library-based constructs over 
language constructs.


So I am a bit torn on library vs language features.  From an 
aesthetic point of view having a small and expressive language 
seems like the better choice, but one can make good arguments for 
a low-level-oriented DSL as well.


With a well-designed DSL the compiler author might more easily 
reason about the programmer's intent and perhaps make better 
optimization choices… thus allowing the programmer to write more 
readable, performant code…


I think the DSL approach makes more sense as computer 
architectures get less simplistic.


Although currently CPU vendors target C-like code, that might 
change in the future as less code is written in C-like languages…



