Re: This thread on Hacker News terrifies me

2018-09-04 Thread Nick Sabalausky (Abscissa) via Digitalmars-d

On 09/04/2018 06:35 PM, Walter Bright wrote:

Another example I read on HackerNews today:

"I recall that during their most recent s3 outage Amazon's status page 
was green across the board, because somehow all the assets that were 
supposed to be displayed when things went wrong were themselves hosted 
on the thing that was down."


https://news.ycombinator.com/item?id=17913302

This is just so basic, folks.


I absolutely hate webmail (and not being able to spin up arbitrary 
numbers of specific-purpose email addresses). So years ago, I added a 
mailserver of my own to my webserver.


I made absolutely certain to use a *gmail* account (much as I hate 
gmail) and not one of my own self-hosted accounts, as my technical 
contact for all my server-related services. It's saved my ass many times.


Re: This thread on Hacker News terrifies me

2018-09-04 Thread Walter Bright via Digitalmars-d

Another example I read on HackerNews today:

"I recall that during their most recent s3 outage Amazon's status page was green 
across the board, because somehow all the assets that were supposed to be 
displayed when things went wrong were themselves hosted on the thing that was down."


https://news.ycombinator.com/item?id=17913302

This is just so basic, folks.


Re: This thread on Hacker News terrifies me

2018-09-04 Thread Ola Fosheim Grøstad via Digitalmars-d
On Monday, 3 September 2018 at 02:58:01 UTC, Nick Sabalausky 
(Abscissa) wrote:
In the 50's/60's in particular, I imagine a much larger 
percentage of programmers probably had either some formal 
engineering background or something equally strong.


I guess some had, but my impression is that it was a rather mixed 
group (probably quite a few from physics, since they got to use 
computers for calculations).


I have heard that some hired people with a music background, as 
musicians understood the basic algorithmic ideas of instructions 
and loops, i.e. how to read and write instructions to be followed 
(sheet music).


Programming by punching in numbers was pretty tedious too... so 
you would want someone very patient.




Re: This thread on Hacker News terrifies me

2018-09-04 Thread Neia Neutuladh via Digitalmars-d

On Tuesday, 4 September 2018 at 11:21:24 UTC, Kagamin wrote:
On Sunday, 2 September 2018 at 21:07:20 UTC, Nick Sabalausky 
(Abscissa) wrote:

B. Physical interface:
--

By this I mean both actual input devices (keyboards, 
controllers, pointing devices) and also the mappings from 
their affordances (ie, what you can do with them: push button 
x, tilt stick's axis Y, point, move, rotate...) to specific 
actions taken on the visual representation (navigate, modify, 
etc.)


Also guess why Linux has problems with hardware support even 
though they have all the programmers they need, who can write 
pretty much anything.


Because hardware costs money, reverse engineering hardware is a 
specialized discipline, reverse engineering the drivers means you 
need twice as many people and a more rigorous process for license 
reasons, and getting drivers wrong could brick the device, 
requiring you to order another copy?


Because Linux *doesn't* have all that many programmers compared 
to the driver writers for literally every device in existence?


Re: This thread on Hacker News terrifies me

2018-09-04 Thread Kagamin via Digitalmars-d
On Sunday, 2 September 2018 at 21:07:20 UTC, Nick Sabalausky 
(Abscissa) wrote:

B. Physical interface:
--

By this I mean both actual input devices (keyboards, 
controllers, pointing devices) and also the mappings from their 
affordances (ie, what you can do with them: push button x, tilt 
stick's axis Y, point, move, rotate...) to specific actions 
taken on the visual representation (navigate, modify, etc.)


Also guess why Linux has problems with hardware support even 
though they have all the programmers they need, who can write 
pretty much anything.


Re: This thread on Hacker News terrifies me

2018-09-04 Thread Kagamin via Digitalmars-d
On Sunday, 2 September 2018 at 21:07:20 UTC, Nick Sabalausky 
(Abscissa) wrote:
GUI programming has been attempted a lot. (See Scratch for one 
of the latest, possibly most successful attempts). But there 
are real, practical reasons it's never made significant 
in-roads (yet).


There are really two main, but largely independent, aspects to 
what you're describing: Visual representation, and physical 
interface:


A. Visual representation:
-

By visual representation, I mean "some kind of text, or UML-ish 
diagrams, or 3D environment, etc".


What's important to keep in mind here is: The *fundamental 
concepts* involved in programming are inherently abstract, and 
thus equally applicable to whatever visual representation is 
used.


If you're going to make a diagram-based or VR-based programming 
tool, it will still be using the same fundamental concepts that 
are already established in text-based programming: Imperative 
loops, conditionals and variables. Functional/declarative 
immutability, purity and high-order funcs. Encapsulation. 
Pipelines (like ranges). Etc. And indeed, all GUI based 
programming tools have worked this way. Because how *else* are 
they going to work?


They say the main difficulty for non-programmers is control flow, 
not the type system; one system was reported usable where control 
flow was represented visually, but sequential statements were 
left as plain C. E.g. we have a system administrator here who has 
no problem with PowerShell, but has absolutely no idea how to 
start with C#.



B. Physical interface:
--

By this I mean both actual input devices (keyboards, 
controllers, pointing devices) and also the mappings from their 
affordances (ie, what you can do with them: push button x, tilt 
stick's axis Y, point, move, rotate...) to specific actions 
taken on the visual representation (navigate, modify, etc.)


Hardware engineers are like the primary target audience for 
visual programming :)

https://en.wikipedia.org/wiki/Labview


Re: This thread on Hacker News terrifies me

2018-09-03 Thread Guillaume Piolat via Digitalmars-d
On Saturday, 1 September 2018 at 11:36:52 UTC, Walter Bright 
wrote:


I'm rather sad that I've never seen these ideas outside of the 
aerospace industry. Added to that is all the pushback on them I 
get here, on reddit, and on hackernews.




Just chiming in to say you're certainly not ignored; there's an 
article on d-idioms: 
https://p0nce.github.io/d-idioms/#Unrecoverable-vs-recoverable-errors


I tend to follow your advice and leave some assertions in 
production, even for a B2C product. I was surprised to see it 
works well; it's a lot better than having no bug reports and 
staying with silent failure in the wild. None of that would 
happen with "recovered" bugs.


Re: This thread on Hacker News terrifies me

2018-09-03 Thread Walter Bright via Digitalmars-d

On 9/3/2018 8:33 AM, tide wrote:
Yes why wouldn't a company want to fix a "feature" where by, if you have a 
scratch on a DVD you have to go buy another one in order to play it.


Not playing it with an appropriate message is fine. Hanging the machine is not.


It's obviously not that big of a deal breaker, even for you, considering you are 
still buying them 20 years on.


Or more likely, I buy another one now and then that will hopefully behave 
better.

I've found that different DVD players will play different damaged DVDs, i.e. one 
that will play one DVD won't play another, and vice versa.


I can get DVD players from the thrift store for $5, it's cheaper than buying a 
replacement DVD :-)




Re: This thread on Hacker News terrifies me

2018-09-03 Thread Gambler via Digitalmars-d
On 9/3/2018 1:55 PM, Gambler wrote:
> There is this VR game called Fantastic Contraption. Its interface is
> light-years ahead of anything else I've seen in VR. The point of the
> game is to design animated 3D structures that solve the problem of
> traversing various obstacles while moving from point A to point B. Is
> that not "real" programming? You make a structure to interact with an
> environment to solve a problem.

I posted this without any context, I guess. I brought this game
(http://fantasticcontraption.com) up, because:

1. It's pretty close to "programming" something useful. Sort of virtual
robotics, minus sensors.
2. The interface is intuitive, fluid and fun to use.
3. While your designs may fail, they fail in a predictable manner,
without breaking the world.
4. It's completely alien to all those horrible wire diagram environments.

Seems like we can learn a lot from it when designing future programming
environments.


Re: This thread on Hacker News terrifies me

2018-09-03 Thread Gambler via Digitalmars-d
I have to delete some quoted text to make this manageable.

On 9/2/2018 5:07 PM, Nick Sabalausky (Abscissa) wrote:
> [...]
> GUI programming has been attempted a lot. (See Scratch for one of the
> latest, possibly most successful attempts). But there are real,
> practical reasons it's never made significant in-roads (yet).
>
> There are really two main, but largely independent, aspects to what
> you're describing: Visual representation, and physical interface:
>
> A. Visual representation:
> -
>
> By visual representation, I mean "some kind of text, or UML-ish
> diagrams, or 3D environment, etc".
>
> What's important to keep in mind here is: The *fundamental concepts*
> involved in programming are inherently abstract, and thus equally
> applicable to whatever visual representation is used.

> If you're going to make a diagram-based or VR-based programming tool, it
> will still be using the same fundamental concepts that are already
> established in text-based programming: Imperative loops, conditionals
> and variables. Functional/declarative immutability, purity and
> high-order funcs. Encapsulation. Pipelines (like ranges). Etc. And
> indeed, all GUI based programming tools have worked this way. Because
> how *else* are they going to work?>
> If what you're really looking for is something that replaces or
> transcends all of those existing, fundamental programming concepts, then
> what you're *really* looking for is a new fundamental programming
> concept, not a visual representation. And once you DO invent a new
> fundamental programming concept, being abstract, it will again be
> applicable to a variety of possible visual representations.

Well, there are quite a few programming approaches that bypass the
concepts you've listed. For example, production (rule-based) systems and
agent-oriented programming. I've become interested in stuff like this
recently, because it looks like a legitimate way out of the mess we're
in. Among other things, I found this really interesting Ph.D. thesis
about a system called LiveWorld:

http://alumni.media.mit.edu/~mt/thesis/mt-thesis.html

Interesting stuff. I believe it would work very well in VR, if
visualized properly.
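
For a flavor of the rule-based idea, here is a toy forward-chaining
sketch in D (purely an assumed illustration; the names and structure
are not LiveWorld's design):

import std.algorithm.searching : canFind;

alias Fact = string;

struct Rule
{
    bool function(const(Fact)[]) when; // condition over the current facts
    Fact fires;                        // fact added when the condition holds
}

// Fire matching rules until no rule can add a new fact.
Fact[] forwardChain(Fact[] facts, const(Rule)[] rules)
{
    bool changed = true;
    while (changed)
    {
        changed = false;
        foreach (r; rules)
            if (r.when(facts) && !facts.canFind(r.fires))
            {
                facts ~= r.fires;
                changed = true;
            }
    }
    return facts;
}

unittest
{
    auto rules = [
        Rule((const(Fact)[] fs) => fs.canFind("raining"), "ground is wet"),
        Rule((const(Fact)[] fs) => fs.canFind("ground is wet"), "slippery"),
    ];
    assert(forwardChain(["raining"], rules).canFind("slippery"));
}

The appeal of such systems is that there is no explicit control flow
to write at all, which is exactly the part non-programmers reportedly
struggle with.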

> That said, it is true some concepts may be more readily amenable to
> certain visual representations than others. But, at least for all the
> currently-known concepts, any combination of concept and representation
> can certainly be made to work.
>
> B. Physical interface:
> --
>
> By this I mean both actual input devices (keyboards, controllers,
> pointing devices) and also the mappings from their affordances (ie, what
> you can do with them: push button x, tilt stick's axis Y, point, move,
> rotate...) to specific actions taken on the visual representation
> (navigate, modify, etc.)
>
> The mappings, of course, tend to be highly dependent on the visual
> representation (although, theoretically, they don't strictly HAVE to
> be). The devices themselves, less so: For example, many of us use a
> pointing device to help us navigate text. Meanwhile, 3D
> modelers/animators find it's MUCH more efficient to deal with their 3D
> models and environments by including heavy use of the keyboard in their
> workflow instead of *just* a mouse and/or wacom alone.

That depends on the editor design. Wings 3D
(http://www.wings3d.com), for example, uses the mouse for most operations.
It's done well and it's much easier to get started with than something
like Blender (which I personally hate). Designers use Wings 3D for
serious work and the interface doesn't seem to become a limitation even
for advanced use cases.

> An important point here, is that using a keyboard has a tendency to be
> much more efficient for a much wider range of interactions than, say, a
> pointing device, like a mouse or touchscreen. There are some things a
> mouse or touchscreen is better at (ie, pointing and learning curve), but
> even on a touchscreen, pointing takes more time than pushing a button
> and is somewhat less composable with additional actions than, again,
> pushing/holding a key on a keyboard.>
> This means that while pointing, and indeed, direct manipulation in
> general, can be very beneficial in an interface, placing too much
> reliance on it will actually make the user LESS productive.

I don't believe this is necessarily true. It's just that programmers and
designers today are really bad at utilizing the mouse. Most of them
aren't even aware of how the device came to be. They have no idea about
NLS or Doug Engelbart's research.

http://www.dougengelbart.org/firsts/mouse.html

They've never looked at subsequent research by Xerox and Apple.

https://www.youtube.com/watch?v=Cn4vC80Pv6Q

That last video blew my mind when I saw it. Partly because it was the
first time I realized that the five most common UI operations (cut,
copy, paste, undo, redo) have no dedicated keys on the keyboard today,
while similar operations on the Xerox Star did. 

Re: This thread on Hacker News terrifies me

2018-09-03 Thread tide via Digitalmars-d
On Saturday, 1 September 2018 at 20:48:27 UTC, Walter Bright 
wrote:

On 9/1/2018 5:25 AM, tide wrote:

and that all bugs can be solved with asserts


I never said that, not even close.


You are in large part implying it, though.

But I will maintain that DVD players still hanging on a 
scratched DVD after 20 years of development means there's some 
cowboy engineering going on, and an obvious lack of concern 
about that from the manufacturer.


Yes why wouldn't a company want to fix a "feature" where by, if 
you have a scratch on a DVD you have to go buy another one in 
order to play it. It's obviously not that big of a deal breaker, 
even for you, considering you are still buying them 20 years on.


Re: This thread on Hacker News terrifies me

2018-09-03 Thread Kagamin via Digitalmars-d
On Saturday, 1 September 2018 at 11:32:32 UTC, Jonathan M Davis 
wrote:
I think that his point was more that it's sometimes argued that 
software engineering really isn't engineering in the classical 
sense. If you're talking about someone like a civil engineer 
for instance, the engineer applies well-known and established 
principles to everything they do in a disciplined way.


If they are asked to do so. In an attempt to be fancy, the sewage 
system in my apartment doesn't have a hydraulic seal, but has a 
workaround: one pipe is flexible. How physical is that?


The engineering aspects of civil engineering aren't subjective 
at all. They're completely based in the physical sciences. 
Software engineering on the other hand isn't based on the 
physical sciences at all, and there really isn't general 
agreement on what good software engineering principles are.


Like in science, ones based on previous experience.


https://en.wikipedia.org/wiki/Software_engineering
One of the core issues in software engineering is that its 
approaches are not empirical enough because a real-world 
validation of approaches is usually absent


That criticism isn't very informed. Also, is the problem really in 
what it's called?


Issues with management cause other problems on top of all of 
that, but even if you have a group of software engineers doing 
their absolute best to follow good software engineering 
principles without any kind of management interference, what 
they're doing is still very different from most engineering 
disciplines


Because hardware engineers want to pass certification. Never 
heard of what they do when they are not constrained by that? And 
even then there's a lot of funny stuff that passes certification 
like that x-ray machine and Intel processors.


and it likely wouldn't be hard for another group of competent 
software engineers to make solid arguments about why the good 
software engineering practices that they're following actually 
aren't all that good.


Anything created by humans has flaws and can be criticized.


Re: This thread on Hacker News terrifies me

2018-09-03 Thread Jonathan M Davis via Digitalmars-d
On Sunday, September 2, 2018 11:54:57 PM MDT Nick Sabalausky (Abscissa) via 
Digitalmars-d wrote:
> On 09/03/2018 12:46 AM, H. S. Teoh wrote:
> > Anything less is unsafe, because being
> > in an invalid state means you cannot predict what the program will do
> > when you try to recover it.  Your state graph may look nothing like what
> > you thought it should look like, so an action that you thought would
> > bring the program into a known state may in fact bring it into a
> > different, unknown state, which can exhibit any arbitrary behaviour.
>
> You mean attempting to do things, like say, generate a stack trace or
> format/display the name of the Error class and a diagnostic message? ;)
>
> Not to say it's all-or-nothing of course, but suppose it IS memory
> corruption and trying to continue WILL cause some bigger problem like
> arbitrary code execution. In that case, won't the standard Error class
> stuff still just trigger that bigger problem, anyway?

Throwing an Error is a lot less likely to cause problems than actually
trying to recover. However, personally, I'm increasingly of the opinion that
the best thing to do would be to not have Errors but to kill the program at
the point of failure. That way, you could get a coredump at the point of
failure, with all of the state that goes with it, making it easier to debug,
and it would be that much less likely to cause any more problems before the
program actually exits. You might still have it print an error message and
stack trace before triggering a HLT or whatever, but I think that that's the
most that I would have it do. And while doing that would still potentially
open up problems, unless someone hijacked that specific piece of code, it
would likely be fine, and it would _really_ help on systems that don't have
coredumps enabled - not to mention seeing that in the log could make
bringing up the coredump in the debugger unnecessary in some cases.
Regardless, getting a coredump at the point of failure would be far better
IMHO than what we currently have with Errors.
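
A minimal sketch of that idea (an assumed illustration only; nothing
like this exists in druntime today): print the little that can safely
be printed, then abort() so the OS can write a coredump with the
state at the failure point intact:

import core.stdc.stdio : fprintf, stderr;
import core.stdc.stdlib : abort;

// Report and halt *at* the point of failure instead of throwing an Error.
void fatal(string msg, string file = __FILE__, size_t line = __LINE__)
    @nogc nothrow
{
    fprintf(stderr, "FATAL: %.*s at %.*s:%zu\n",
            cast(int) msg.length, msg.ptr,
            cast(int) file.length, file.ptr, line);
    abort(); // SIGABRT -> coredump (if enabled), full state preserved
}

Unlike throwing an Error, nothing else gets a chance to run between
detecting the bug and halting.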

- Jonathan M Davis





Re: This thread on Hacker News terrifies me

2018-09-02 Thread Nick Sabalausky (Abscissa) via Digitalmars-d

On 09/03/2018 12:46 AM, H. S. Teoh wrote:

On Sun, Sep 02, 2018 at 09:33:36PM -0700, H. S. Teoh wrote:
[...]

The reason I picked memory corruption is because it's a good
illustration of how badly things can go wrong when code that is known to
have programming bugs continue running unchecked.

[...]

P.S. And memory corruption is also a good illustration of how a logic
error in one part of the program can cause another completely unrelated
part of the program to malfunction.  The corruption could have happened
in your network stack, but it overwrites memory used by your GPU code.
You cannot simply assume that just because the network module has
nothing to do with the GPU module, that a GPU code assertion failure
cannot be caused by a memory corruption in the network module.
Therefore, you also cannot assume that an assertion in the GPU code can
be safely ignored, because by definition, the program's logic is flawed,
and so any assumptions you may have made about it may no longer be true,
and blindly continuing to run the code means the possibility of actually
executing a remote exploit instead of the GPU code you thought you were
about to execute.



Isn't that assuming the parts aren't @safe? ;)


Anything less is unsafe, because being
in an invalid state means you cannot predict what the program will do
when you try to recover it.  Your state graph may look nothing like what
you thought it should look like, so an action that you thought would
bring the program into a known state may in fact bring it into a
different, unknown state, which can exhibit any arbitrary behaviour.


You mean attempting to do things, like say, generate a stack trace or 
format/display the name of the Error class and a diagnostic message? ;)


Not to say it's all-or-nothing of course, but suppose it IS memory 
corruption and trying to continue WILL cause some bigger problem like 
arbitrary code execution. In that case, won't the standard Error class 
stuff still just trigger that bigger problem, anyway?


Re: This thread on Hacker News terrifies me

2018-09-02 Thread H. S. Teoh via Digitalmars-d
On Sun, Sep 02, 2018 at 09:33:36PM -0700, H. S. Teoh wrote:
[...]
> The reason I picked memory corruption is because it's a good
> illustration of how badly things can go wrong when code that is known to
> have programming bugs continue running unchecked.
[...]

P.S. And memory corruption is also a good illustration of how a logic
error in one part of the program can cause another completely unrelated
part of the program to malfunction.  The corruption could have happened
in your network stack, but it overwrites memory used by your GPU code.
You cannot simply assume that just because the network module has
nothing to do with the GPU module, that a GPU code assertion failure
cannot be caused by a memory corruption in the network module.
Therefore, you also cannot assume that an assertion in the GPU code can
be safely ignored, because by definition, the program's logic is flawed,
and so any assumptions you may have made about it may no longer be true,
and blindly continuing to run the code means the possibility of actually
executing a remote exploit instead of the GPU code you thought you were
about to execute.
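
A contrived D sketch of that point (a toy example, assumed rather
than taken from the post): an off-by-one write in "network" code
lands on state the "GPU" code depends on, so the GPU-side assertion
fires even though the GPU logic itself is correct:

struct State
{
    int[4] networkBuf; // written by the network module
    int gpuMode = 1;   // read by the GPU module; sits right after the buffer
}

void networkCode(ref State s) @system
{
    int* p = s.networkBuf.ptr;
    p[4] = 0xDEAD; // off-by-one: silently overwrites gpuMode
}

void gpuCode(const ref State s)
{
    assert(s.gpuMode == 1, "GPU state corrupted by unrelated code");
}

(Note that the stray p[4] write would not compile in @safe code,
which is the point of the "@safe" quip elsewhere in the thread.)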

When the program logic is known to be flawed, by definition the program
is in an invalid state with unknown (and unknowable -- because it
implies that your assumptions were false) consequences.  The only safe
recourse is to terminate the program to get out of that state and
restart from a known safe state.  Anything less is unsafe, because being
in an invalid state means you cannot predict what the program will do
when you try to recover it.  Your state graph may look nothing like what
you thought it should look like, so an action that you thought would
bring the program into a known state may in fact bring it into a
different, unknown state, which can exhibit any arbitrary behaviour.
(This is why certain security holes are known as "arbitrary code
execution": the attacker exploits a loophole in the program's state
graph to do something the programmer never thought the program could do
-- because the programmer's assumptions turned out to be wrong.)


T

-- 
This sentence is false.


Re: This thread on Hacker News terrifies me

2018-09-02 Thread H. S. Teoh via Digitalmars-d
On Mon, Sep 03, 2018 at 03:21:00AM +, tide via Digitalmars-d wrote:
[...]
> Any graphic problems are going to stem probably more from shaders and
> interaction with the GPU than any sort of logic code.
[...]
> What he was talking about was basically that, he was saying how it
> could be used to identify possible memory corruption, which is
> completely absurd.  That's just stretching its use case so thin.

You misquote me. I never said asserts could be used to *identify* memory
corruption -- that's preposterous.  What I'm saying is that when an
assert failed, it *may* be caused by a memory corruption (among many
other possibilities), and that is one of the reasons why it's a bad idea
to keep going in spite of the assertion failure.

The reason I picked memory corruption is because it's a good
illustration of how badly things can go wrong when code that is known to
have programming bugs continue running unchecked.  When an assertion
fails it basically means the program has a logic error, and what the
programmer assumed the program will do is wrong.  Therefore, by
definition, you cannot predict what the program will actually do -- and
remote exploits via memory corruption is a good example of how your
program can end up doing something completely different from what it was
designed to do when you keep going in spite of logic errors.

Obviously, assertions aren't going to catch *all* memory corruptions,
but given that an assertion failure *might* be caused by a memory
corruption, why would anyone in their sane mind want to allow the
program to keep going?  We cannot catch *all* logic errors by
assertions, but why would anyone want to deliberately ignore the logic
errors that we *can* catch?


T

-- 
If creativity is stifled by rigid discipline, then it is not true creativity.


Re: This thread on Hacker News terrifies me

2018-09-02 Thread tide via Digitalmars-d
On Saturday, 1 September 2018 at 13:21:27 UTC, Jonathan M Davis 
wrote:
On Saturday, September 1, 2018 6:37:13 AM MDT tide via 
Digitalmars-d wrote:

On Saturday, 1 September 2018 at 08:18:03 UTC, Walter Bright

wrote:
> On 8/31/2018 7:28 PM, tide wrote:
>> I'm just wondering but how would you code an assert to 
>> ensure the variable for a title bar is the correct color? 
>> Just how many asserts are you going to have in your 
>> real-time game that can be expected to run at 144+ fps ?

>
> Experience will guide you on where to put the asserts.
>
> But really, just apply common sense. It's not just for 
> software. If you're a physicist, and your calculations come 
> up with a negative mass, you screwed up. If you're a 
> mechanical engineer, and calculate a force of billion pounds 
> from dropping a piano, you screwed up. If you're an 
> accountant, and calculate that you owe a million dollars in 
> taxes on a thousand dollars of income, you screwed up. If 
> you build a diagnostic X-ray machine, and the control 
> software computes a lethal dose to administer, you screwed 
> up.

>
> Apply common sense and assert on unreasonable results, 
> because your code is broken.


That's what he, and apparently you, don't get. How are you 
going to use an assert to check that the color of a title bar 
is valid? Try and implement that assert, and let me know what 
you come up with.


I don't think that H. S. Teoh's point was so much that you 
should be asserting anything about the colors in the graphics 
but rather that problems in the graphics could be a sign of a 
deeper, more critical problem and that as such the fact that 
there are graphical glitches is not necessarily innocuous. 
However, presumably, if you're going to put assertions in that 
code, you'd assert things about the actual logic that seems 
critical and not anything about the colors or whatnot - though 
if the graphical problems would be a sign of a deeper problem, 
then the assertions could then prevent the graphical problems, 
since the program would be killed before they happened due to 
the assertions about the core logic failing.
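
For instance, a hedged sketch of what "assert the actual logic"
might look like (an assumed illustration; the palette and the names
are hypothetical):

import std.algorithm.searching : canFind;

immutable uint[] palette = [0xFF000000, 0xFFFFFFFF]; // hypothetical table

struct Theme { uint titleBarColor; }

void applyTheme(const Theme t)
{
    // Core-logic invariant: every color must come from the palette.
    // If this fails, the theme state is corrupt -- the deeper problem --
    // and a wrong title-bar color would merely have been its symptom.
    assert(palette.canFind(t.titleBarColor), "theme state corrupted");
    // ... hand the color to the renderer ...
}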


- Jonathan M Davis


Any graphic problems are going to stem probably more from shaders 
and interaction with the GPU than any sort of logic code. Not 
that you can really use asserts to ensure you are making calls to 
something like Vulkan correctly. There are validation layers for 
that, which are more helpful than assert would ever be. They 
still have a cost: as an example, my engine runs at 60+ FPS on my 
crappy phone without the validation layers, but with them enabled 
I get roughly less than half that, 10-15 FPS, depending on where 
I'm looking. So using them in production code isn't exactly 
possible.



What he was talking about was basically that: he was saying it 
could be used to identify possible memory corruption, which is 
completely absurd. That's just stretching its use case so thin.


Re: This thread on Hacker News terrifies me

2018-09-02 Thread Nick Sabalausky (Abscissa) via Digitalmars-d

On 09/02/2018 09:20 PM, Walter Bright wrote:

On 9/1/2018 8:18 PM, Nick Sabalausky (Abscissa) wrote:

[...]


My take on all this is people spend 5 minutes thinking about it and are 
confident they know it all.


Wouldn't it be nice if we COULD do that? :)

A few years back some hacker claimed they'd gotten into the Boeing 
flight control computers via the passenger entertainment system. I don't 
know the disposition of this case, but if true, such coupling of systems 
is a gigantic no-no. Some engineers would have some serious 'splainin to 
do.


Wonder if it could've just been a honeypot. (Or just someone who was 
full-of-it.) Although, I'm not sure how much point there would be to a 
honeypot if the systems really were electronically isolated.


Re: This thread on Hacker News terrifies me

2018-09-02 Thread Nick Sabalausky (Abscissa) via Digitalmars-d

On 09/02/2018 07:17 PM, Gambler wrote:


But in general, I believe the statement about comparative reliability of
tech from the 1970s is true. I'm perpetually impressed with all the 
mainframe software that often runs mission-critical operations in places
you would least expect.


I suspect it may be because, up until around the 90's, in order to get 
any code successfully running on the computer at all, you pretty much 
had to know at least a thing or two about how a computer works and how 
to use it. And performance/efficiency issues were REALLY obvious. Not to 
mention the institutional barriers to entry: Everyone didn't just have a 
computer in their pocket, or even in their den at home.


(Plus the machines themselves tended to be simpler: It's easier to write 
good code when a single programmer can fully understand every byte of 
the machine and their code is all that's running.)


In the 50's/60's in particular, I imagine a much larger percentage of 
programmers probably had either some formal engineering background or 
something equally strong.


But now, pretty much anyone can (and often will) cobble together 
something that more-or-less runs. Ie, there used to be a stronger 
barrier to entry, and the machines/tools tended to be less tolerant of 
problems.


Re: This thread on Hacker News terrifies me

2018-09-02 Thread Walter Bright via Digitalmars-d

On 9/1/2018 8:18 PM, Nick Sabalausky (Abscissa) wrote:

[...]


My take on all this is people spend 5 minutes thinking about it and are 
confident they know it all.


A few years back some hacker claimed they'd gotten into the Boeing flight 
control computers via the passenger entertainment system. I don't know the 
disposition of this case, but if true, such coupling of systems is a gigantic 
no-no. Some engineers would have some serious 'splainin to do.


Re: This thread on Hacker News terrifies me

2018-09-02 Thread Gambler via Digitalmars-d
On 9/1/2018 11:42 PM, Nick Sabalausky (Abscissa) wrote:
> On 09/01/2018 05:06 PM, Ola Fosheim Grøstad wrote:
>>
>> If you have a specific context (like banking) then you can develop a
>> software method that specifies how to build banking software, and
>> repeat it, assuming that the banks you develop the method for are similar
>>
>> Of course, banking has changed quite a lot over the past 15 years
>> (online + mobile). Software often operates in contexts that are
>> critically different and that change in somewhat unpredictable manners.
>>
> 
> Speaking of, that always really gets me:
> 
> The average ATM is 24/7. Sure, there may be some downtime, but what, how
> much? For the most part, these things were more or less reliable decades
> ago, from a time with *considerably* less of the "best practices" and
> accumulated experience, know-how, and tooling we have today. And over
> the years, they still don't seem to have screwed ATMs up too badly.
> 
> But contrast that to my bank's phone "app": This thing *is* rooted
> firmly in modern technology, modern experience, modern collective
> knowledge, modern hardware and...The servers it relies on *regularly* go
> down for several hours at a time during the night. That's been going on
> for the entire 2.5 years I've been using it.
> 
> And for about an hour the other day, despite using the latest update,
> most of the the buttons on the main page were *completely* unresponsive.
> Zero acknowledgement of presses whatsoever. But I could tell the app
> wasn't frozen: The custom-designed text entry boxes still handled focus
> events just fine.
> 
> Tech from 1970's: Still working fine. Tech from 2010's: Pfffbbttt!!!
> 
> Clearly something's gone horribly, horribly wrong with modern software
> development.

I wouldn't vouch for ATM reliability. You would be surprised what kinds
of garbage software they run. Think Windows XP as the OS:

http://info.rippleshot.com/blog/windows-xp-still-running-95-percent-atms-world

But in general, I believe the statement about comparative reliability of
tech from the 1970s is true. I'm perpetually impressed with all the
mainframe software that often runs mission-critical operations in places
you would least expect.

Telecom systems are generally very reliable, although it feels that
started to change recently.



Re: This thread on Hacker News terrifies me

2018-09-02 Thread Ola Fosheim Grøstad via Digitalmars-d
On Sunday, 2 September 2018 at 04:59:49 UTC, Nick Sabalausky 
(Abscissa) wrote:
A. People not caring enough about their own craft to actually 
TRY to learn how to do it right.


Well, that is an issue: many students enroll in programming 
courses not because they take pride in writing good programs, but 
because they think that working with computers would somehow be 
an attractive career path.


Still, my impression is that students that write good programs 
also seem to be good at theory.


B. HR people who know nothing about the domain they're hiring 
for.


Well, I think that goes beyond HR people. Lead programmers in 
small businesses who either don't have an education or didn't do 
too well will also feel that someone who does know what they are 
doing is a threat to their position. Another issue is that 
management does not want to hire people who they think will get 
bored with their "boring" software projects... so they'd rather 
hire someone less apt who will not quit the job after 6 months...


So there are a lot of dysfunctional aspects at the very 
foundation of software development processes in many real world 
businesses.


I wouldn't expect anything great to come out of this... I also 
suspect that many managers don't truly understand that one good 
programmer can replace several bad ones...



C. Overall societal reliance on schooling systems that:

- Know little about teaching and learning,

- Even less about software development,


Not sure what you mean by this. In many universities you can sign 
up for the courses you are interested in. It is really up to the 
student to figure out what their profile should be.


Anyway, since there are many methodologies, you will have to 
train your own team in your specific setup. With a well rounded 
education a good student should have the knowledge that will let 
them participate in discussions about how to structure the work.


So there is really no way for any university to teach you exactly 
what the process should be like.


This is no different from other fields.  Take a sawmill; there 
are many ways to structure the manufacturing process in a 
sawmill. Hopefully people with an education are able to grok the 
process and participate in discussions about how to improve it, 
but the specifics depend on the concrete sawmill production 
line.






Re: This thread on Hacker News terrifies me

2018-09-02 Thread Nick Sabalausky (Abscissa) via Digitalmars-d

On 09/01/2018 03:47 PM, Everlast wrote:


It's because programming is done completely wrong. All we do is program 
like it's 1952, all wrapped up in a nice box and bow tie. We should have 
tools and a compiler design that all work interconnected, with complete 
graphical interfaces that aren't based in the text-GUI world (an IDE is 
just a fancy text editor). I'm talking about 3D code representation 
using graphics, so projects can be navigated visually in a dynamic way, 
and many other things.


The current programming model is reaching diminishing returns. Programs 
cannot get much more complicated because the environment in which they 
are written cannot support them (complexity != size).


We have amazing tools available to do amazing things but programming is 
still treated like punch cards, just on acid. I'd like to get totally 
away from punch cards.


A total rewrite of all aspects of programming should be done: from 
"object" files (no more; they are not needed, at least not in the form 
they are), to the IDE (it should be more like a video game, in the 
sense of graphical use, and provide extensive information and debugging 
support all a fingertip away), to the tools, to the design of 
applications, etc.


One day we will get there...



GUI programming has been attempted a lot. (See Scratch for one of the 
latest, possibly most successful attempts). But there are real, 
practical reasons it's never made significant in-roads (yet).


There are really two main, but largely independent, aspects to what 
you're describing: Visual representation, and physical interface:


A. Visual representation:
-

By visual representation, I mean "some kind of text, or UML-ish 
diagrams, or 3D environment, etc".


What's important to keep in mind here is: The *fundamental concepts* 
involved in programming are inherently abstract, and thus equally 
applicable to whatever visual representation is used.


If you're going to make a diagram-based or VR-based programming tool, it 
will still be using the same fundamental concepts that are already 
established in text-based programming: Imperative loops, conditionals 
and variables. Functional/declarative immutability, purity and 
high-order funcs. Encapsulation. Pipelines (like ranges). Etc. And 
indeed, all GUI based programming tools have worked this way. Because 
how *else* are they going to work?
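
To make the "pipelines (like ranges)" point concrete, here is the
dataflow concept in its textual D representation; a diagram-based
tool would express the identical semantics as boxes and wires:

void main()
{
    import std.algorithm : filter, map;
    import std.array : array;
    import std.range : iota;

    // source -> filter -> map -> sink: exactly what a wire diagram draws
    auto squaresOfEvens = iota(10)
        .filter!(n => n % 2 == 0)
        .map!(n => n * n)
        .array;
    assert(squaresOfEvens == [0, 4, 16, 36, 64]);
}

Same concept, different surface representation.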


If what you're really looking for is something that replaces or 
transcends all of those existing, fundamental programming concepts, then 
what you're *really* looking for is a new fundamental programming 
concept, not a visual representation. And once you DO invent a new 
fundamental programming concept, being abstract, it will again be 
applicable to a variety of possible visual representations.


That said, it is true some concepts may be more readily amenable to 
certain visual representations than others. But, at least for all the 
currently-known concepts, any combination of concept and representation 
can certainly be made to work.


B. Physical interface:
--

By this I mean both actual input devices (keyboards, controllers, 
pointing devices) and also the mappings from their affordances (ie, what 
you can do with them: push button x, tilt stick's axis Y, point, move, 
rotate...) to specific actions taken on the visual representation 
(navigate, modify, etc.)


The mappings, of course, tend to be highly dependent on the visual 
representation (although, theoretically, they don't strictly HAVE to 
be). The devices themselves, less so: For example, many of us use a 
pointing device to help us navigate text. Meanwhile, 3D 
modelers/animators find it's MUCH more efficient to deal with their 3D 
models and environments by including heavy use of the keyboard in their 
workflow instead of *just* a mouse and/or wacom alone.


An important point here, is that using a keyboard has a tendency to be 
much more efficient for a much wider range of interactions than, say, a 
pointing device, like a mouse or touchscreen. There are some things a 
mouse or touchscreen is better at (ie, pointing and learning curve), but 
even on a touchscreen, pointing takes more time than pushing a button 
and is somewhat less composable with additional actions than, again, 
pushing/holding a key on a keyboard.


This means that while pointing, and indeed, direct manipulation in 
general, can be very beneficial in an interface, placing too much 
reliance on it will actually make the user LESS productive.


The result:
---

For programming to transcend the current text/language model, *without* 
harming either productivity or programming power (as all attempts so far 
have done), we will first need to invent entirely new high-level 
concepts which are simultaneously both simple/high-level enough AND 
powerful enough to obsolete most of the nitty-gritty lower-level 
concepts we programmers still need to deal with on a regular basis.


And once we do 

Re: This thread on Hacker News terrifies me

2018-09-02 Thread Laeeth Isharc via Digitalmars-d
On Sunday, 2 September 2018 at 06:25:47 UTC, Nick Sabalausky 
(Abscissa) wrote:

On 08/31/2018 07:47 PM, Jonathan M Davis wrote:


However, many
teachers really aren't great programmers. They aren't 
necessarily bad
programmers, but unless they spent a bunch of time in industry 
before
teaching, odds are that they don't have all of the software 
engineering
skills that the students are going to need once they get into 
the field. And
most courses aren't designed to teach students the practical 
skills.
This is why we really should bring back the ancient practice of 
apprenticeship, that we've mostly gotten away from.


Doesn't have to be identical to the old system in every detail, 
but who better to teach XYZ to members of a new generation than 
those who ARE experts at XYZ.


Sure, teaching in and of itself is a skill, and not every 
domain expert is a good teacher. But like any skill, it can be 
learned. And after all: Who really stands a better chance at 
passing on expertise?:


A. Someone who already has the expertise, but isn't an expert 
in teaching.


B. Someone who is an expert at teaching, but doesn't possess 
what's being taught anyway.


Hint: No matter how good of a teacher you are, you can't teach 
what you don't know.


Heck, if all else fails, pair up domain experts WITH teaching 
experts! No need for any jacks-of-all-trades: When people 
become domain experts, just "apprentice" them in a secondary 
skill: Teaching their domain.


Sounds a heck of a lot better to me than the ridiculous current 
strategy of: Separate the entire population into "theory" (ie, 
Academia) and "practical" (ie, Industry) even though it's 
obvious that the *combination* of theory and practical is 
essential for any good work on either side. Have only the 
"theory" people do all the teaching for the next generation of 
BOTH "theory" and "practical" folks. Students then gain the 
"practical" side from...what, the freaking ether From the 
industry which doesn't care about quality, only profit??? From 
the "theory" folk that are never taught the "practical"??? From 
where, out of a magical freaking hat?!?!?


I agree.  I have been arguing the same for a few years now.

https://www.quora.com/With-6-million-job-openings-in-the-US-why-are-people-complaining-that-there-are-no-jobs-available/answer/Laeeth-Isharc?srid=7h

We de-emphasized degrees; they are information only, except that 
for work permits a degree is a factor (and sadly it is), and we 
are also open to hiring people with less vocationally relevant 
degrees.  A recent hire I made was a chap who studied music at 
Oxford and played the organ around the corner.  His boss is a 
Fellow in Maths at Trinity College, Cambridge and is very happy 
with him.


And we started hiring apprentices ourselves.  The proximate 
trigger for me to make it happen was a frustrating set of 
interviews with more career-oriented people from banks for a 
support role in London.  "Is it really asking too much to expect 
that somebody who works on computers should actually like playing 
with them ?"


So we went to a technical college nearby where someone in the 
group lives and we made a start this year and in time it will 
grow.


The government introduced an apprenticeship programme.  I don't 
think many people use it yet because it's not adapted to 
commercial factors.  But anything new is bad in the first version 
and it will get better.








Re: This thread on Hacker News terrifies me

2018-09-02 Thread Steven Schveighoffer via Digitalmars-d

On 9/1/18 6:29 AM, Shachar Shemesh wrote:

On 31/08/18 23:22, Steven Schveighoffer wrote:

On 8/31/18 3:50 PM, Walter Bright wrote:

https://news.ycombinator.com/item?id=17880722

Typical comments:

"`assertAndContinue` crashes in dev and logs an error and keeps going 
in prod. Each time we want to verify a runtime assumption, we decide 
which type of assert to use. We prefer `assertAndContinue` (and I 
push for it in code review),"


e.g. D's assert. Well, actually, D doesn't log an error in production.



I think it's the music of the thing rather than the thing itself.

Mecca has ASSERT, a condition that is always checked and always 
crashes the program if it fails, and DBG_ASSERT, which, like D's built-in 
assert, is skipped in release mode (essentially, an assert where you 
can log what went wrong without using the GC-dependent format).
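
Something like this minimal sketch (the two names are Mecca's; the
implementation here is only an assumed illustration, not Mecca's
actual code):

import core.stdc.stdio : fprintf, stderr;
import core.stdc.stdlib : abort;

// Always checked, even in release builds: cheap enough for production.
void ASSERT(bool cond, string msg,
            string file = __FILE__, size_t line = __LINE__) @nogc nothrow
{
    if (!cond)
    {
        fprintf(stderr, "%.*s:%zu: ASSERT failed: %.*s\n",
                cast(int) file.length, file.ptr, line,
                cast(int) msg.length, msg.ptr);
        abort();
    }
}

// Checked only when asserts are compiled in, like the built-in assert.
void DBG_ASSERT(lazy bool cond, lazy string msg,
                string file = __FILE__, size_t line = __LINE__)
{
    version (assert)
        if (!cond) ASSERT(false, msg, file, line);
}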


When you compare this to what Walter was quoting, you get the same end 
result, but a vastly different intention. It's one thing to say "this 
ASSERT is cheap enough to be tested in production, while this DBG_ASSERT 
one is optimized out". It's another to say "well, in production we want 
to keep going no matter what, so we'll just ignore the asserts".


Which is exactly what Phobos and Druntime do (ignore asserts in 
production). I'm not sure how the intention makes any difference.


The obvious position of D is that asserts and bounds checks shouldn't be 
used in production -- that is how we ship our libraries. It is what the 
"-release" switch does. How else could it be interpreted?


-Steve


Re: This thread on Hacker News terrifies me

2018-09-02 Thread Patrick Schluter via Digitalmars-d
On Sunday, 2 September 2018 at 04:21:44 UTC, Jonathan M Davis 
wrote:
On Saturday, September 1, 2018 9:18:17 PM MDT Nick Sabalausky 
(Abscissa) via Digitalmars-d wrote:


So honestly, I don't find it at all surprising when an 
application can't handle not being able to write to disk. 
Ideally, it _would_ handle it (even if it's simply by shutting 
down, because it can't handle not having enough disk space), 
but for most applications, it really is thought of like running 
out of memory. So, isn't tested for, and no attempt is made to 
make it sane.


One reason why programs using stdio do fail with disk space 
errors is that their authors don't know that fclose() can be the 
function reporting them, not the fwrite()/fputs()/fprintf(). I 
cannot count the number of times I saw things like this:


FILE *fd = fopen(..., "w");   /* path elided in the original */

if (fwrite(buffer, length, 1, fd) < 1) {
    /* fine error handling here */
}
fclose(fd);   /* return value silently ignored */

On disk fullness, the fwrite() might have accepted the data, but 
only the fclose() really flushed the data to disk, detecting the 
lack of space only at that moment.
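
For completeness, a sketch of the careful version (here in D through 
the C bindings; the file name and buffer are placeholders):

import core.stdc.stdio;

bool writeAll(const(ubyte)[] buffer)
{
    FILE* fd = fopen("out.dat", "wb"); // placeholder path
    if (fd is null)
        return false;
    bool ok = fwrite(buffer.ptr, buffer.length, 1, fd) == 1;
    // fclose() flushes the stdio buffer, so "disk full" may only
    // surface here -- its return value must be checked as well.
    ok = (fclose(fd) == 0) && ok;
    return ok;
}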



Honestly, for some of this stuff, I think that the only way 
that it's ever going to work sanely is if extreme failure 
conditions result in Errors or Exceptions being thrown, and the 
program being killed. Most code simply isn't ever going to be 
written to handle such situations, and a for a _lot_ of 
programs, they really can't continue without those resources - 
which is presumably, why the way D's GC works is to throw an 
OutOfMemoryError when it can't allocate anything. Anything 
C-based (and plenty of C++-based programs too) is going to have 
serious problems though thanks to the fact that C/C++ programs 
often use APIs where you have to check a return code, and if 
it's a function that never fails under normal conditions, most 
programs aren't going to check it. Even diligent programmers 
are bound to miss some of them.


Indeed, since some of those error checks also differ from OS to 
OS, some cases might detect things in one setting but not in 
others. See my example above: on DOS, or if setvbuf() was set to 
NULL, it could not happen, as the fwrite() would always flush the 
data to disk and the error condition would be caught nearly 
99.% of times.


Re: This thread on Hacker News terrifies me

2018-09-02 Thread Nick Sabalausky (Abscissa) via Digitalmars-d

On 09/02/2018 02:06 AM, Joakim wrote:
On Sunday, 2 September 2018 at 05:16:43 UTC, Nick Sabalausky (Abscissa) 
wrote:


Smug as I may have been at the time, it wasn't until later I 
realized the REAL smart ones were the ones out partying, not the grads 
or the nerds like me.


Why? Please don't tell me you believe this nonsense:

"Wadhwa... argues (I am not joking) that partying is a valuable part of 
the college experience because it teaches students interpersonal skills."
https://www.forbes.com/sites/jerrybowyer/2012/05/22/a-college-bubble-so-big-even-the-new-york-times-and-60-minutes-can-see-it-sort-of/ 



Learning skills from partying? Hah hah, no, no, it's not about anything 
like that. :) (Social skills matter, but obviously plenty of other ways 
to practice those.)


No, it's just that honing skills isn't the only thing in life that 
matters. Simply living life while you're here is important too, for its 
own sake, even if you only realize it after the fact.


Re: This thread on Hacker News terrifies me

2018-09-02 Thread Nick Sabalausky (Abscissa) via Digitalmars-d

On 09/02/2018 12:21 AM, Jonathan M Davis wrote:


The C APIs on the other hand require that you
check the return value, and some of the C++ APIs require the same.


Heh, yea, as horrifically awful as return value errors really are, I 
have to admit, with them, at least it's actually *possible* to handle 
these low-resource situations sanely, instead of D's "not-so-right-thing 
by default" approach of *guaranteeing* that all software under the same 
circumstance just freaks out and runs screaming like KDE.


(Much as I love D, and as much as I believe in "fail fast", the "Error" 
class still irritates me to no end. My usual approach to dealing with it 
is to stick my fingers in my ears and go "La la la la la!!! It doesn't 
affect me!!! There's no such thing as non-Exceptions being thrown!!! La 
la la". Not exactly a sound engineering principle. If we actually 
HAD the mechanisms Walter advocates for *dealing* with fail-fast 
processes, then I might have a different opinion. But we *don't*, it's 
just code-n-pray for now, and nothing of the sort is even on the most 
pie-in-the-sky roadmap.)




Honestly, for some of this stuff, I think that the only way that it's ever
going to work sanely is if extreme failure conditions result in Errors or
Exceptions being thrown, and the program being killed.


Under current tools and approaches, that is, unfortunately, probably 
very true.


However...


Most code simply
isn't ever going to be written to handle such situations,


This is 2018. We all have a freaking Dick Tracy wireless supercomputer, 
that can believably simulate entire connected alternate realities, in 
realtime...in our pockets! Right now!


If we, the collective software development community of 2018, can't get 
far enough off our collective asses and *do something about* (as opposed 
to *completely ignore and never bother even so much as an automated 
test*) something as basic, obvious, *automatable*, and downright 
*timeless* as...not having our software freak out in the absence of 
resources we're not even freaking using...Well, then we, as an entire 
profession...genuinely SU*K. Hard. (And yes, I am definitely including 
myself in that judgement. I'm more than willing to change my 
prehistoric-programmer ways. But implementation-wise there's relevant 
domain experience I lack, so I can't make this all happen by myself, so 
there needs to be some buy-in.)




Anything C-based (and plenty of C++-based programs
too) is going to have serious problems though


Well, yea. No mystery or surprise there. Another reason I tend to be a 
bit dismayed at the continued popularity of those languages (not that 
I'm unaware of all the reasons for their continued popularity).


Re: This thread on Hacker News terrifies me

2018-09-02 Thread Nick Sabalausky (Abscissa) via Digitalmars-d

On 08/31/2018 07:47 PM, Jonathan M Davis wrote:


However, many
teachers really aren't great programmers. They aren't necessarily bad
programmers, but unless they spent a bunch of time in industry before
teaching, odds are that they don't have all of the software engineering
skills that the students are going to need once they get into the field. And
most courses aren't designed to teach students the practical skills.
This is why we really should bring back the ancient practice of 
apprenticeship, that we've mostly gotten away from.


Doesn't have to be identical to the old system in every detail, but who 
better to teach XYZ to members of a new generation than those who ARE 
experts at XYZ.


Sure, teaching in and of itself is a skill, and not every domain expert 
is a good teacher. But like any skill, it can be learned. And after all: 
Who really stands a better chance at passing on expertise?:


A. Someone who already has the expertise, but isn't an expert in teaching.

B. Someone who is an expert at teaching, but doesn't possess what's being 
taught anyway.


Hint: No matter how good of a teacher you are, you can't teach what you 
don't know.


Heck, if all else fails, pair up domain experts WITH teaching experts! 
No need for any jacks-of-all-trades: When people become domain experts, 
just "apprentice" them in a secondary skill: Teaching their domain.


Sounds a heck of a lot better to me than the ridiculous current strategy 
of: Separate the entire population into "theory" (ie, Academia) and 
"practical" (ie, Industry) even though it's obvious that the 
*combination* of theory and practical is essential for any good work on 
either side. Have only the "theory" people do all the teaching for the 
next generation of BOTH "theory" and "practical" folks. Students then 
gain the "practical" side from...what, the freaking ether? From the 
industry which doesn't care about quality, only profit??? From the 
"theory" folk that are never taught the "practical"??? From where, out 
of a magical freaking hat?!?!?


Re: This thread on Hacker News terrifies me

2018-09-02 Thread Joakim via Digitalmars-d
On Sunday, 2 September 2018 at 05:16:43 UTC, Nick Sabalausky 
(Abscissa) wrote:

On 09/02/2018 12:53 AM, Jonathan M Davis wrote:


Ouch. Seriously, seriously ouch.



Heh, yea, well...that particular one was a state party school, 
so, what y'gonna do? *shrug*


Smug as I may have been at the time, it wasn't until 
later I realized the REAL smart ones were the ones out 
partying, not the grads or the nerds like me.


Why? Please don't tell me you believe this nonsense:

"Wadhwa... argues (I am not joking) that partying is a valuable 
part of the college experience because it teaches students 
interpersonal skills."

https://www.forbes.com/sites/jerrybowyer/2012/05/22/a-college-bubble-so-big-even-the-new-york-times-and-60-minutes-can-see-it-sort-of/


Re: This thread on Hacker News terrifies me

2018-09-01 Thread Nick Sabalausky (Abscissa) via Digitalmars-d

On 09/01/2018 09:15 AM, Jonathan M Davis wrote:


I don't know if any DVD players have ever used Java, but all Blu-ray players
do require it, because unfortunately, the Blu-ray spec allows for the menus
to be done via Java (presumably so that they can be fancier than what was
possible on DVDs).



DVDs (PUOs)...BluRays (Java)...Web...

All evidence that the more technical power you give content creators, 
the bigger the design abominations they'll subject the world to ;) [1]


I actually once came across a professionally-produced, major-studio 
BluRay where pressing Pause didn't pause the video, but actually made it 
deliberately switch over to and play a *different* video instead.


This is why a good system is one that automatically does the right thing 
by default, and makes you work to do the wrong thing...if it's even 
important enough to have the escape hatch.


[1] That, of course, is not an "us-vs-them" statement: Programmers are 
by definition content creators, too!


Re: This thread on Hacker News terrifies me

2018-09-01 Thread Nick Sabalausky (Abscissa) via Digitalmars-d

On 09/02/2018 12:53 AM, Jonathan M Davis wrote:


Ouch. Seriously, seriously ouch.



Heh, yea, well...that particular one was a state party school, so, what 
y'gonna do? *shrug*


Smug as I may have been at the time, it wasn't until later I 
realized the REAL smart ones were the ones out partying, not the grads 
or the nerds like me. Ah, young hubris ;)  (Oh, the computer art 
students, BTW, were actually really fun to hang out with! I think they 
probably managed to hit the best balance of work & play.)


Re: This thread on Hacker News terrifies me

2018-09-01 Thread Nick Sabalausky (Abscissa) via Digitalmars-d

On 09/01/2018 02:15 AM, Ola Fosheim Grøstad wrote:


The root cause of bad software is that many programmers don't even have 
an education in CS or software engineering, or didn't do a good job 
while getting it!




Meh, no. The root cause trifecta is:

A. People not caring enough about their own craft to actually TRY to 
learn how to do it right.


B. HR people who know nothing about the domain they're hiring for.

C. Overall societal reliance on schooling systems that:

- Know little about teaching and learning,

- Even less about software development,

- And can't even decide whether their priorities should be "pure 
theory *without* sufficient practical" or "catering to the 
above-mentioned HR folk's swing-and-miss, armchair-expert attempts at 
defining criteria for identifying good programmers" (Hint: The answer is 
"neither").


Re: This thread on Hacker News terrifies me

2018-09-01 Thread Jonathan M Davis via Digitalmars-d
On Saturday, September 1, 2018 10:44:57 PM MDT Nick Sabalausky (Abscissa) 
via Digitalmars-d wrote:
> On 09/01/2018 01:51 AM, rikki cattermole wrote:
> > But in saying that, we had third year students starting out not
> > understanding how cli arguments work so...
>
> How I wish that sort of thing surprised me ;)
>
> As part of the generation that grew up with BASIC on 80's home
> computers, part of my spare time in high school involved some PalmOS
> development (man I miss PalmOS). Wasn't exactly anything special - you
> pony up a little $ for Watcom (or was it Borland?), open the IDE, follow
> the docs, do everything you normally do. Read a book. Yawn. After that,
> in college, had a job working on Palm and WAP websites (anyone remember
> those? Bonus points if you remember the Palm version - without WiFi or
> telephony it was an interesting semi-mobile experience).
>
> Imagine my shock another year after that when I saw the college I was
> attending bragging how their computer science *graduate* students...with
> the help and guidance of a professor...had gotten a hello world "running
> on an actual Palm Pilot!" Wow! Can your grad students also tie their own
> shoes and wipe their own noses with nothing more than their own wits and
> somebody else to help them do it??? Because, gee golly, that would be
> one swell accomplishment! Wow! Hold your hat, Mr. Dean, because Ivy
> League, here you come!!

Ouch. Seriously, seriously ouch.

- Jonathan M Davis





Re: This thread on Hacker News terrifies me

2018-09-01 Thread Nick Sabalausky (Abscissa) via Digitalmars-d

On 09/01/2018 01:51 AM, rikki cattermole wrote:


But in saying that, we had third year students starting out not 
understanding how cli arguments work so...




How I wish that sort of thing surprised me ;)

As part of the generation that grew up with BASIC on 80's home 
computers, part of my spare time in high school involved some PalmOS 
development (man I miss PalmOS). Wasn't exactly anything special - you 
pony up a little $ for Watcom (or was it Borland?), open the IDE, follow 
the docs, do everything you normally do. Read a book. Yawn. After that, 
in college, had a job working on Palm and WAP websites (anyone remember 
those? Bonus points if you remember the Palm version - without WiFi or 
telephony it was an interesting semi-mobile experience).


Imagine my shock another year after that when I saw the college I was 
attending bragging how their computer science *graduate* students...with 
the help and guidance of a professor...had gotten a hello world "running 
on an actual Palm Pilot!" Wow! Can your grad students also tie their own 
shoes and wipe their own noses with nothing more than their own wits and 
somebody else to help them do it??? Because, gee golly, that would be 
one swell accomplishment! Wow! Hold your hat, Mr. Dean, because Ivy 
League, here you come!!


Re: This thread on Hacker News terrifies me

2018-09-01 Thread Nick Sabalausky (Abscissa) via Digitalmars-d

On 08/31/2018 07:20 PM, H. S. Teoh wrote:


The problem is that there is a disconnect between academia and the
industry.

The goal in academia is to produce new research, to find ground-breaking
new theories that bring a lot of recognition and fame to the institution
when published. It's the research that will bring in the grants and
enable the institution to continue existing. As a result, there is heavy
focus on the theoretical concepts, which are the basis for further
research, rather than pragmatic tedium like how to debug a program.



I don't know where you've been but it doesn't match anything I've ever seen.

Everything I've ever seen: The goal in academia is to advertise 
impressive-looking rates for graduation and job placement. This 
maximizes the size of the application pool which, depending on the 
school, means one of two things:


1. More students paying ungodly tuition rates. Thus making the schools 
and their administrators even richer. (Pretty much any public liberal 
arts school.)


or

2. Higher quality students (defined by the school as "more likely to 
graduate and more likely to be placed directly in a job"), thus earning 
the school the right to demand an even MORE ungodly tuition from the 
fixed-size pool of students they accept. Thus making the schools and 
their administrators even richer. (Pretty much any private liberal arts 
school.)


Achieving the coveted prize of "We look attractive to applicants" involves:

First: As much of a revolving-door system as they can get away with 
without jeopardizing their accreditation.


And secondly: Supplementing the basic Computer Science theory with 
awkward, stumbling, half-informed attempts at placating the industry's 
brain-dead, know-nothing HR monkeys[1] with the latest hot trends. For 
me, at the time, that meant Java 2 and the "Thou must OOP, for OOP is 
all" religion.


[1] "I don't know anything about programming, but I'm good at 
recognizing people who are good at it."  <-- A real quote from a real HR 
monkey I once made the mistake of attempting basic communication with.


But then, let's not forget that schools have HR, too. Which leads to 
really fun teachers like the professor I had for a Computer Networking 
class:


He had a PhD in Computer Science. He would openly admit that C was the 
only language he knew. Ok, fair enough so far. But...upon my explaining 
to him how he made a mistake grading my program, I found *myself* forced 
to teach the *Computer Science professor* how strings 
(remember...C...null-terminated) worked in the *only* language he knew. 
He had NO freaking clue! A freakin' CS PhD! Forget "theory vs practical" 
- if you do not know the *fundamental basics* of EVEN ONE language, then 
you *CANNOT* function in even the theoretical or research realms, or 
teach it. Computer science doesn't even *exist* without computer 
programming! Oh, and this, BTW, was a school that pretty much any 
Clevelander will tell you "Oh! Yea, that's a really good school, it has 
a fantastic reputation!" Compared to what? Ohio State Football University?


Re: This thread on Hacker News terrifies me

2018-09-01 Thread Jonathan M Davis via Digitalmars-d
On Saturday, September 1, 2018 9:18:17 PM MDT Nick Sabalausky (Abscissa) via 
Digitalmars-d wrote:
> On 08/31/2018 03:50 PM, Walter Bright wrote:
> [From your comment in that thread]
>
> > fill up your system disk to near capacity, then try to run various
> > apps and system utilities.
>
> I've had that happen on accident once or twice recently. KDE does NOT
> handle it well: *Everything* immediately either hangs or dies as soon as
> it gains focus. Well, I guess it could be worse, but it still really irks
> me: "Seriously, KDE? You can't even DO NOTHING without trying to write
> to the disk? And you, you other app specifically designed for dealing
> with large numbers of large files, why in the world would you attempt to
> write GB+ files without ever checking available space?"

I suspect that if KDE is choking, it's due to issues with files in /tmp,
since they like to use temp files for stuff, and I _think_ that some of it
is using unix sockets, in which case they're using the socket API to talk
between components, and I wouldn't ever expect anyone to check disk space
with that - though I _would_ expect them to check for failed commands and
handle them appropriately, even if the best that they can do is close the
program with a pop-up.

I think that what it ultimately comes down to though is that a lot of
applications treat disk space like they treat memory. You don't usually
check whether you have enough memory. At best, you check whether a
particular memory allocation succeeded and then try to handle it sanely if
it failed. With D, we usually outright kill the program if we fail to
allocate memory - and really, if you're using std.stdio and std.file for all
of your file operations, you'll probably get the same thing, since an
exception would be thrown on write failure, and if you didn't catch it, then
it will kill your program (though if you do catch it, it obviously can vary
considerably what happens). The C APIs on the other hand require that you
check the return value, and some of the C++ APIs require the same. So, if
you're not doing that right, you can quickly get your program into a weird
state if functions that you expect to always succeed start failing.

So honestly, I don't find it at all surprising when an application can't
handle not being able to write to disk. Ideally, it _would_ handle it (even
if it's simply by shutting down, because it can't handle not having enough
disk space), but for most applications, it really is thought of like running
out of memory. So, it isn't tested for, and no attempt is made to make it sane.
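
To make that concrete, here's a minimal D sketch of the two error models
(the function names are invented for illustration):

import core.stdc.stdio : FILE, fclose, fopen, fwrite;
import std.file : write, FileException;
import std.stdio : stderr;

// Exception model: std.file.write throws a FileException on failure
// (e.g. disk full), so an unhandled write error kills the program
// instead of letting it limp along in a weird state.
void saveExceptionStyle(string path, const(ubyte)[] data)
{
    try
    {
        write(path, data);
    }
    catch (FileException e)
    {
        stderr.writeln("write failed: ", e.msg); // handle, rethrow, or die
    }
}

// Return-code model: the C API reports failure only through return
// values, and every skipped check is a chance to continue silently.
bool saveReturnCodeStyle(const(char)* path, const(ubyte)[] data)
{
    FILE* f = fopen(path, "wb");
    if (f is null)
        return false;
    immutable ok = fwrite(data.ptr, 1, data.length, f) == data.length;
    return fclose(f) == 0 && ok; // fclose flushes buffers and can fail too
}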

I would have hoped that something like KDE would have sorted it out by now
given that it's been around long enough that more than one person would have
run into the problem and complained about it, but given that it's a suite of
applications developed in someone's free time, it wouldn't surprise me at
all if the response was to just get more disk space.

Honestly, for some of this stuff, I think that the only way that it's ever
going to work sanely is if extreme failure conditions result in Errors or
Exceptions being thrown, and the program being killed. Most code simply
isn't ever going to be written to handle such situations, and for a _lot_
of programs, they really can't continue without those resources - which is
presumably why the way D's GC works is to throw an OutOfMemoryError when it
can't allocate anything. Anything C-based (and plenty of C++-based programs
too) is going to have serious problems though thanks to the fact that C/C++
programs often use APIs where you have to check a return code, and if it's a
function that never fails under normal conditions, most programs aren't
going to check it. Even diligent programmers are bound to miss some of them.

- Jonathan M Davis





Re: This thread on Hacker News terrifies me

2018-09-01 Thread Nick Sabalausky (Abscissa) via Digitalmars-d

On 09/01/2018 05:06 PM, Ola Fosheim Grøstad wrote:


If you have a specific context (like banking) then you can develop a 
software method that specifies how to build banking software, and repeat 
it, assuming that the banks you develop the method for are similar


Of course, banking has changed quite a lot over the past 15 years 
(online + mobile). Software often operates in contexts that are 
critically different and that change in somewhat unpredictable manners.




Speaking of, that always really gets me:

The average ATM is 24/7. Sure, there may be some downtime, but what, how 
much? For the most part, these things were more or less reliable decades 
ago, from a time with *considerably* less of the "best practices" and 
accumulated experience, know-how, and tooling we have today. And over 
the years, they still don't seem to have screwed ATMs up too badly.


But contrast that to my bank's phone "app": This thing *is* rooted 
firmly in modern technology, modern experience, modern collective 
knowledge, modern hardware and...The servers it relies on *regularly* go 
down for several hours at a time during the night. That's been going on 
for the entire 2.5 years I've been using it.


And for about an hour the other day, despite using the latest update, 
most of the buttons on the main page were *completely* unresponsive. 
Zero acknowledgement of presses whatsoever. But I could tell the app 
wasn't frozen: The custom-designed text entry boxes still handled focus 
events just fine.


Tech from 1970's: Still working fine. Tech from 2010's: Pfffbbttt!!!

Clearly something's gone horribly, horribly wrong with modern software 
development.


Re: This thread on Hacker News terrifies me

2018-09-01 Thread Nick Sabalausky (Abscissa) via Digitalmars-d

On 08/31/2018 05:09 PM, H. S. Teoh wrote:


It's precisely for this reason that the title "software engineer" makes
me cringe on the one hand, and snicker on the other hand.  I honestly
cannot keep a straight face when using the word "engineering" to
describe what a typical programmer does in the industry these days.



Science is the reverse-engineering of reality to understand how it 
works. Engineering is the practical application of scientific knowledge.


I don't know, maybe those are simplified, naive definitions. But I've 
long been of the opinion that programming is engineering...*if* and only 
if...you're doing it right.


Of course, my background is primarily from programming itself, not from 
an existing engineering field, so I certainly won't claim that what I do 
necessarily qualifies as "engineering", but it is something I try to 
aspire to, FWIW.


Re: This thread on Hacker News terrifies me

2018-09-01 Thread Nick Sabalausky (Abscissa) via Digitalmars-d

On 08/31/2018 03:50 PM, Walter Bright wrote:

https://news.ycombinator.com/item?id=17880722

Typical comments:

"`assertAndContinue` crashes in dev and logs an error and keeps going in 
prod. Each time we want to verify a runtime assumption, we decide which 
type of assert to use. We prefer `assertAndContinue` (and I push for it 
in code review),"




Yea, that one makes me cringe. I could at least understand "unwind the 
stack 'till you're at least out of this subsystem, and THEN MAYBE 
abort/retry (but not ignore)", though I know you disagree on that. But 
to just...continue as if nothing happened...Ugh. Just reminds me of 
common dynamic scripting language design and why I never liked those 
languages: If the programmer wrote something nonsensical, best to do 
something completely random instead of giving them an error message!
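
For what it's worth, the difference is easy to sketch in D (these helper
names are made up to mirror that comment, not taken from any real codebase):

import std.stdio : stderr;

// The sane policy: a failed check means the program logic is broken,
// so halt. In D, assert throws an AssertError and terminates.
void checkAndHalt(bool cond, string msg)
{
    assert(cond, msg);
}

// The "assertAndContinue" policy from that comment: crash in dev,
// but log and keep executing in a known-broken state in prod.
void checkAndContinue(bool cond, string msg)
{
    debug
    {
        assert(cond, msg); // dev build (-debug): fail fast
    }
    else
    {
        if (!cond)
            stderr.writeln("assertion failed, continuing anyway: ", msg);
    }
}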



"Stopping all executing may not be the correct 'safe state' for an 
airplane though!"


Honestly, comments like this suggest to me someone who's operating under 
the false assumption that "stop all executing" means "permanently stop 
all of the software running on all components of the plane" rather than 
"stop (and possibly restart) one of several redundant versions of one 
particular subsystem". Which suggests they only read comments and not 
the article.


Interestingly, the same user also said:

"Software development often does seem like a struggle between 
reliability/robustness and safety/correctness."


WAT?!

That's like saying, "Speaker design often seems like a struggle between 
loudness versus volume." Each one *requires* the other.


Scary.



"One faction believed you should never intentionally crash the app"


I can understand how people may naively come to that conclusion: "Duh, 
crashing is bad, so why would you do it intentionally?" But, of course, 
the reasoning is faulty.


There's also the "It depends on your industry/audience. You're talking 
airplanes, but my software isn't critical enough to bother with the same 
principles." I wonder if it might help to remind such people that's 
*exactly* how MS ended up with Windows Me:


This is well-known:

After Win3.11, MS decided that businesses required more reliability from 
their OS than the home users needed. So they split Windows into two 
product lines: WinNT for business (more focus on reliability) and Win95 
for home (speed and features were more important).


Things started out mostly ok. Win95 wasn't quite as reliable as NT, but 
not a gigantic difference, and it was expected. Then Win98...some more 
issues, while NT stayed more or less as-was. Then WinMe hit. BOOM!


By that point, the latest in the WinNT line was "Win2k", which was STILL 
regarded as pretty well stable, so MS did what's probably the smartest 
move they've ever made: Killed off the 9x/Me codebase, added DirectX to 
Win2k and called it "WinXP". And it spent a whole decade widely hailed 
as the first ever home version of Windows to not be horrible.


So yea, I don't care how non-critical you think your software is. If 
it's worth using, then it's important enough.


And on and on. It's unbelievable. The conventional wisdom in software 
for how to deal with programming bugs simply does not exist.


In my observation, there doesn't seem to be much conventional wisdom in 
software in general. Everything, no matter how basic or seemingly 
obvious, is up for big, major debate. (Actually, not even restricted to 
programming.)



[From your comment in that thread]
> fill up your system disk to near capacity, then try to run various 
apps and system utilities.


I've had that happen on accident once or twice recently. KDE does NOT 
handle it well: *Everything* immediately either hangs or dies as soon as 
it gains focus. Well, I guess it could be worse, but it still really irks 
me: "Seriously, KDE? You can't even DO NOTHING without trying to write 
to the disk? And you, you other app specifically designed for dealing 
with large numbers of large files, why in the world would you attempt to 
write GB+ files without ever checking available space?"


Seriously, nothing in tech ever improves. Every step forward comes with 
a badly-rationalized step back. Things just get shuffled around, rubble 
gets bounced, trends get obsessively chased in circles, and ultimately 
there's little, if any, overall progress. "What Andy giveth, Bill taketh 
away." Replace Andy/Bill with any one of thousands of different 
pairings, it still holds.


And there's no motivation for any of it to change. Capitalism rewards 
those who make more money by selling more flashy garbage that's bad 
enough to create more need for more garbage to deal with the flaws from 
the last round of garbage. It doesn't reward those who make a better 
product that actually reduces need for more. Sometimes something decent 
will come along, and briefly succeed by virtue of being good. But it's 
temporary and inevitably gets killed off by the next positive feedback 
loop of inferiority. 

Re: This thread on Hacker News terrifies me

2018-09-01 Thread Norm via Digitalmars-d
On Saturday, 1 September 2018 at 20:48:27 UTC, Walter Bright 
wrote:

On 9/1/2018 5:25 AM, tide wrote:

and that all bugs can be solved with asserts


I never said that, not even close.

But I will maintain that DVD players still hanging on a 
scratched DVD after 20 years of development means there's some 
cowboy engineering going on, and an obvious lack of concern 
about that from the manufacturer.


Firstly, you have to take into account the context around why 
that bug exists and why it is not fixed, and it comes down to a 
risk-cost trade-off.


Product managers are totally driven by budget and in consumer 
goods they dictate the engineering resources. I think you'll find 
most large DVD manufacturers have discovered that it is not cost 
effective to give engineers the budget to fix these annoying 
bugs. This is because most consumers will be annoyed but then go 
out and purchase some other product by the same manufacturer. I.e. 
these bugs do not harm their brand enough.


This leads to the situation where the engineering is shoddy not 
because the programmers are bad engineers, but because they don't 
even get the chance to engineer due to time constraints.


Secondly, DVD players and critical flight systems are apples and 
oranges in terms of engineering rigor required. One will mildly 
annoy the odd consumer, who 9 times out of 10 will still purchase 
products again, and the other will likely kill 
100s of people.


To put it another way: one will give the engineers *zero* 
resources to work on non-blocking bugs and the other must have 
*zero* non-blocking bugs.


Cheers,
Norm


Re: This thread on Hacker News terrifies me

2018-09-01 Thread Gambler via Digitalmars-d
On 9/1/2018 7:36 AM, Walter Bright wrote:
> On 9/1/2018 2:15 AM, Paulo Pinto wrote:
>> On Saturday, 1 September 2018 at 08:19:49 UTC, Walter Bright wrote:
>>> On 8/31/2018 11:59 PM, Paulo Pinto wrote:
> For example, in any CS program, are there any courses at all about
> this?
 Yes, we had them on my degree,
>>>
>>> I'm curious how the courses you took compared with the articles I
>>> wrote about it.
>>
>> I will read the articles later, but as overview, we learned about:
>> [...]
> It appears to have nothing related to what the articles are about.
> 
> I'm rather sad that I've never seen these ideas outside of the aerospace
> industry. Added to that is all the pushback on them I get here, on
> reddit, and on hackernews.

Some people do. I should take this opportunity to plug one of Alan Kay's
great talks.

"Programming and Scaling"
https://www.youtube.com/watch?v=YyIQKBzIuBY

But Kay definitely isn't a Hacker News/Reddit darling, even though he's
well respected. He's too critical of the current state of software
engineering. Rightfully so, if you ask me.


Re: This thread on Hacker News terrifies me

2018-09-01 Thread Walter Bright via Digitalmars-d

On 9/1/2018 2:33 PM, Gambler wrote:

Alan Kay, Joe Armstrong, Jim Coplien - just to name a few famous people
who talked about this issue. It's amazing that so many engineers still
don't get it. I'm inclined to put some blame on the recent TDD movement.
They often seem to stress low-level code perfectionism, while ignoring
high-level architecture and runtime resilience (in other words, system
thinking).


Yup. The worst notions in the industry are:

1. We can make this software bug-free by using Fad Technique X.

2. Since our software has no bugs in it, we needn't concern ourselves with what 
happens when it fails.


3. If it does fail, since we have no plan for that due to (2), we must let it 
proceed anyway.


What could possibly go wrong?



Re: This thread on Hacker News terrifies me

2018-09-01 Thread Gambler via Digitalmars-d
On 8/31/2018 3:50 PM, Walter Bright wrote:
> https://news.ycombinator.com/item?id=17880722
> 
> Typical comments:
> 
> "`assertAndContinue` crashes in dev and logs an error and keeps going in
> prod. Each time we want to verify a runtime assumption, we decide which
> type of assert to use. We prefer `assertAndContinue` (and I push for it
> in code review),"
> 
> "Stopping all executing may not be the correct 'safe state' for an
> airplane though!"
> 
> "One faction believed you should never intentionally crash the app"
> 
> "One place I worked had a team that was very adamant about not really
> having much error checking. Not much of any qc process, either. Wait for
> someone to complain about bad data and respond. Honestly, this worked
> really well for small, skunkworks type projects that needed to be nimble."
> 
> And on and on. It's unbelievable. The conventional wisdom in software
> for how to deal with programming bugs simply does not exist.
> 
> Here's the same topic on Reddit with the same awful ideas:
> 
> https://www.reddit.com/r/programming/comments/9bl72d/assertions_in_production_code/
> 
> 
> No wonder that DVD players still hang when you insert a DVD with a
> scratch on it, and I've had a lot of DVD and Bluray players over the
> last 20 years. No wonder that malware is everywhere.

All too true.

A while ago I worked for a large financial company.

Many production systems had zero monitoring. A server with networking
issues could continue to misbehave _for hours_ until someone somewhere
noticed thousands of error messages and manually intervened.

There were also very few data quality checks. Databases could have
duplicate records, missing records or obviously inconsistent
information. Most systems just continued to process corrupt data as if
nothing happened, propagating it further and further.

Some crucial infrastructure had no usable data backups.

With all this in mind, you would be surprised to hear how much they
talked about "software quality". It's just that their notion of quality
revolved around having no bugs ever go into production and never
bringing down any systems. There were ever increasing requirements
around unit test coverage, opinionated coding standards and a lot of
paperwork associated with every change.

Needless to say, it didn't work very well, and they had around half a
dozen outages of varying sizes _every day_.

Alan Kay, Joe Armstrong, Jim Coplien - just to name a few famous people
who talked about this issue. It's amazing that so many engineers still
don't get it. I'm inclined to put some blame on the recent TDD movement.
They often seem to stress low-level code perfectionism, while ignoring
high-level architecture and runtime resilience (in other words, system
thinking).


Re: This thread on Hacker News terrifies me

2018-09-01 Thread Ola Fosheim Grøstad via Digitalmars-d
On Saturday, 1 September 2018 at 11:32:32 UTC, Jonathan M Davis 
wrote:
I'm not sure that I really agree that software engineering 
isn't engineering, but the folks who argue against it do have a 
point in that software engineering is definitely not like most 
other engineering disciplines, and good engineering practices 
are nowhere near as well-defined in software engineering as 
those in other engineering fields.


Most engineering fields have a somewhat stable/slow moving 
context in which they operate.


If you have a specific context (like banking) then you can 
develop a software method that specifies how to build banking 
software, and repeat it, assuming that the banks you develop the 
method for are similar.


Of course, banking has changed quite a lot over the past 15 years 
(online + mobile). Software often operates in contexts that are 
critically different and that change in somewhat unpredictable 
manners.


But road engineers have a somewhat more stable context, they can 
adapt their methodology over time. Context does change even 
there, but at a more predictable pace.


Of course, this might be primarily because computers are new. 
Businesses tend to use software/robotics in a disruptive manner 
to get a competitive edge over their competitors. So the users 
themselves create disruptive contexts in their search for the 
"cutting edge" or "competitive edge".


As it becomes more and more intertwined with how people do 
business, it might become more stable, and then you might see 
methods for specific fields that are more like engineering in 
older, established fields (like building railways).




Re: This thread on Hacker News terrifies me

2018-09-01 Thread Walter Bright via Digitalmars-d

On 9/1/2018 5:33 AM, tide wrote:

It is vastly different, do you know what fly by wire is?


Yes, I do. Do you know I worked for three years on critical flight controls 
systems at Boeing? I said so in the article(s). These ideas are not mine, I did 
not come up with them in 5 minutes at the keyboard. They're central to the 
aerospace design industry, and their fantastic safety record is testament to how 
effective they are.



If the system controlling that just stops working, how do you 
expect the pilot to fly the plane?


https://www.digitalmars.com/articles/b40.html

May I draw your attention back to the "Dual Path" section.

Fly by wire failures are no different in principle from a total hydraulic system 
failure on a fully powered flight control system.


Airliners do suffer total hydraulic failures now and then, despite being unable 
to fly without hydraulics, yet land safely. I'll give you one guess how that is 
done :-) I can give you a brief rundown on how it works after you guess.
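
As a toy D sketch of one way redundant paths can back each other up in 
software (the control laws, tolerance, and fallback here are invented; 
see the article for the real design):

import std.math : abs;

// Two independently implemented computations of the same command.
double primaryLaw(double input) { return input * 1.5; }
double backupLaw(double input)  { return input * 1.5; }

// Cross-check the paths; on disagreement, fall back to a known-safe
// output rather than blindly trusting either possibly-broken path.
double elevatorCommand(double input, double lastKnownGood)
{
    immutable a = primaryLaw(input);
    immutable b = backupLaw(input);
    if (abs(a - b) > 1e-2)
        return lastKnownGood;
    return a;
}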


Re: This thread on Hacker News terrifies me

2018-09-01 Thread Walter Bright via Digitalmars-d

On 9/1/2018 5:25 AM, tide wrote:

and that all bugs can be solved with asserts


I never said that, not even close.

But I will maintain that DVD players still hanging on a scratched DVD after 20 
years of development means there's some cowboy engineering going on, and an 
obvious lack of concern about that from the manufacturer.


Re: This thread on Hacker News terrifies me

2018-09-01 Thread Walter Bright via Digitalmars-d

On 9/1/2018 1:18 AM, Walter Bright wrote:

On 8/31/2018 7:28 PM, tide wrote:
I'm just wondering but how would you code an assert to ensure the variable for 
a title bar is the correct color? Just how many asserts are you going to have 
in your real-time game that can be expected to run at 144+ fps ?

[...]


John Regehr has some excellent advice, much better than mine:

https://blog.regehr.org/archives/1091


Re: This thread on Hacker News terrifies me

2018-09-01 Thread Everlast via Digitalmars-d

On Friday, 31 August 2018 at 19:50:20 UTC, Walter Bright wrote:

https://news.ycombinator.com/item?id=17880722

Typical comments:

"`assertAndContinue` crashes in dev and logs an error and keeps 
going in prod. Each time we want to verify a runtime 
assumption, we decide which type of assert to use. We prefer 
`assertAndContinue` (and I push for it in code review),"


"Stopping all executing may not be the correct 'safe state' for 
an airplane though!"


"One faction believed you should never intentionally crash the 
app"


"One place I worked had a team that was very adamant about not 
really having much error checking. Not much of any qc process, 
either. Wait for someone to complain about bad data and 
respond. Honestly, this worked really well for small, 
skunkworks type projects that needed to be nimble."


And on and on. It's unbelievable. The conventional wisdom in 
software for how to deal with programming bugs simply does not 
exist.


Here's the same topic on Reddit with the same awful ideas:

https://www.reddit.com/r/programming/comments/9bl72d/assertions_in_production_code/

No wonder that DVD players still hang when you insert a DVD 
with a scratch on it, and I've had a lot of DVD and Bluray 
players over the last 20 years. No wonder that malware is 
everywhere.


It's because programming is done completely wrong. All we do is 
program like it's 1952, all wrapped up in a nice box and bow tie. 
We should have tools and a compiler design that all work 
interconnected, with complete graphical interfaces that aren't 
based in the text-GUI world (an IDE is just a fancy text editor). 
I'm talking about 3D code representation using graphics, so 
projects can be navigated visually in a dynamic way, and many 
other things.


The current programming model is reaching diminishing returns. 
Programs cannot get much more complicated because the environment 
in which they are written cannot support them (complexity != size).


We have amazing tools available to do amazing things but 
programming is still treated like punch cards, just on acid. I'd 
like to get totally away from punch cards.


A total rewrite of all aspects of programming should be done: from 
"object" files (no more, they are not needed, at least not in the 
form they are), to the IDE (it should be more like a video game, in 
the sense of graphical use, and provide extensive information and 
debugging support all a fingertip away), to the tools, to 
the design of applications, etc.


One day we will get there...



Re: This thread on Hacker News terrifies me

2018-09-01 Thread rikki cattermole via Digitalmars-d

On 02/09/2018 1:15 AM, Jonathan M Davis wrote:

On Saturday, September 1, 2018 6:46:38 AM MDT rikki cattermole via
Digitalmars-d wrote:

On 02/09/2018 12:21 AM, tide wrote:

On Saturday, 1 September 2018 at 05:53:12 UTC, rikki cattermole wrote:

On 01/09/2018 12:40 PM, tide wrote:

On Friday, 31 August 2018 at 22:42:39 UTC, Walter Bright wrote:

On 8/31/2018 2:40 PM, tide wrote:

I don't think I've ever had a **game** hung up in a black screen
and not be able to close it.


I've had that problem with every **DVD player** I've had in the last
20 years. Power cycling is the only fix.


Two very different things, odds are your DVD player's code isn't even
written with a complete C compiler or libraries.


And yet they manage to run a JVM with Java on it.


Not the ones Walter is talking about. I rarely have to power cycle any
smart device, even my phone which is running so much shit on it.


For some reason I have memories related to DVD players containing a JVM
to provide interactivity. But it doesn't look like those memories were
based on anything. So ignore me.


I don't know if any DVD players have ever used Java, but all Blu-ray players
do require it, because unfortunately, the Blu-ray spec allows for the menus
to be done via Java (presumably so that they can be fancier than what was
possible on DVDs).

- Jonathan M Davis


Harry Potter 1 & 2 had games as part of their menus as of 2001/2, so it 
was already pretty sophisticated.


Re: This thread on Hacker News terrifies me

2018-09-01 Thread Jonathan M Davis via Digitalmars-d
On Saturday, September 1, 2018 6:37:13 AM MDT tide via Digitalmars-d wrote:
> On Saturday, 1 September 2018 at 08:18:03 UTC, Walter Bright
>
> wrote:
> > On 8/31/2018 7:28 PM, tide wrote:
> >> I'm just wondering but how would you code an assert to ensure
> >> the variable for a title bar is the correct color? Just how
> >> many asserts are you going to have in your real-time game that
> >> can be expected to run at 144+ fps ?
> >
> > Experience will guide you on where to put the asserts.
> >
> > But really, just apply common sense. It's not just for
> > software. If you're a physicist, and your calculations come up
> > with a negative mass, you screwed up. If you're a mechanical
> > engineer, and calculate a force of billion pounds from dropping
> > a piano, you screwed up. If you're an accountant, and calculate
> > that you owe a million dollars in taxes on a thousand dollars
> > of income, you screwed up. If you build a diagnostic X-ray
> > machine, and the control software computes a lethal dose to
> > administer, you screwed up.
> >
> > Apply common sense and assert on unreasonable results, because
> > your code is broken.
>
> That's what he, and apparently you, don't get. How are you going
> to use an assert to check that the color of a title bar is valid?
> Try and implement that assert, and let me know what you come up
> with.

I don't think that H. S. Teoh's point was so much that you should be
asserting anything about the colors in the graphics but rather that problems
in the graphics could be a sign of a deeper, more critical problem and that
as such the fact that there are graphical glitches is not necessarily
innocuous. However, presumably, if you're going to put assertions in that
code, you'd assert things about the actual logic that seems critical and not
anything about the colors or whatnot - though if the graphical problems
would be a sign of a deeper problem, then the assertions could then prevent
the graphical problems, since the program would be killed before they
happened due to the assertions about the core logic failing.
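
A small D sketch of that distinction (the Theme type and the
transparency check are invented for illustration):

// Assert on the logic that *produces* the graphics, not on rendered
// pixels: "wrong shade of blue" isn't assertable, but "the config
// layer produced an invisible title bar" is a genuine logic bug.
struct Theme
{
    uint titleBarColor; // 0xAARRGGBB
}

Theme makeTheme(uint argb)
{
    assert((argb >> 24) != 0, "config produced a fully transparent title bar");
    return Theme(argb);
}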

- Jonathan M Davis





Re: This thread on Hacker News terrifies me

2018-09-01 Thread Jonathan M Davis via Digitalmars-d
On Saturday, September 1, 2018 6:46:38 AM MDT rikki cattermole via 
Digitalmars-d wrote:
> On 02/09/2018 12:21 AM, tide wrote:
> > On Saturday, 1 September 2018 at 05:53:12 UTC, rikki cattermole wrote:
> >> On 01/09/2018 12:40 PM, tide wrote:
> >>> On Friday, 31 August 2018 at 22:42:39 UTC, Walter Bright wrote:
>  On 8/31/2018 2:40 PM, tide wrote:
> > I don't think I've ever had a **game** hung up in a black screen
> > and not be able to close it.
> 
>  I've had that problem with every **DVD player** I've had in the last
>  20 years. Power cycling is the only fix.
> >>>
> >>> Two very different things, odds are your DVD player's code isn't even
> >>> written with a complete C compiler or libraries.
> >>
> >> And yet they manage to run a JVM with Java on it.
> >
> > Not the ones Walter is talking about. I rarely have to power cycle any
> > smart device, even my phone which is running so much shit on it.
>
> For some reason I have memories related to DVD players containing a JVM
> to provide interactivity. But it doesn't look like those memories were
> based on anything. So ignore me.

I don't know if any DVD players have ever used Java, but all Blu-ray players
do require it, because unfortunately, the Blu-ray spec allows for the menus
to be done via Java (presumably so that they can be fancier than what was
possible on DVDs).

- Jonathan M Davis





Re: This thread on Hacker News terrifies me

2018-09-01 Thread tide via Digitalmars-d
On Saturday, 1 September 2018 at 13:03:50 UTC, rikki cattermole 
wrote:

On 02/09/2018 12:57 AM, tide wrote:
On Saturday, 1 September 2018 at 12:49:12 UTC, rikki 
cattermole wrote:

On 02/09/2018 12:37 AM, tide wrote:
On Saturday, 1 September 2018 at 08:18:03 UTC, Walter Bright 
wrote:

On 8/31/2018 7:28 PM, tide wrote:
I'm just wondering but how would you code an assert to 
ensure the variable for a title bar is the correct color? 
Just how many asserts are you going to have in your 
real-time game that can be expected to run at 144+ fps ?


Experience will guide you on where to put the asserts.

But really, just apply common sense. It's not just for 
software. If you're a physicist, and your calculations come 
up with a negative mass, you screwed up. If you're a 
mechanical engineer, and calculate a force of billion 
pounds from dropping a piano, you screwed up. If you're an 
accountant, and calculate that you owe a million dollars in 
taxes on a thousand dollars of income, you screwed up. If 
you build a diagnostic X-ray machine, and the control 
software computes a lethal dose to administer, you screwed 
up.


Apply common sense and assert on unreasonable results, 
because your code is broken.


That's what he, and apparently you, don't get. How are you 
going to use an assert to check that the color of a title 
bar is valid? Try and implement that assert, and let me know 
what you come up with.


If you have the ability to screenshot a window like I do, oh, 
one simple method call is all that's required, with a simple 
index to get the color.


But that isn't something I'd go test... Too much system-y 
stuff that can modify it.


And you're putting that into production code? Cause that's the 
entire point of this topic :).


like Walter has been arguing, are better left untested in 
production.


That's not what Walter has been arguing.





Re: This thread on Hacker News terrifies me

2018-09-01 Thread rikki cattermole via Digitalmars-d

On 02/09/2018 12:57 AM, tide wrote:

On Saturday, 1 September 2018 at 12:49:12 UTC, rikki cattermole wrote:

On 02/09/2018 12:37 AM, tide wrote:

On Saturday, 1 September 2018 at 08:18:03 UTC, Walter Bright wrote:

On 8/31/2018 7:28 PM, tide wrote:
I'm just wondering but how would you code an assert to ensure the 
variable for a title bar is the correct color? Just how many 
asserts are you going to have in your real-time game that can be 
expected to run at 144+ fps ?


Experience will guide you on where to put the asserts.

But really, just apply common sense. It's not just for software. If 
you're a physicist, and your calculations come up with a negative 
mass, you screwed up. If you're a mechanical engineer, and calculate 
a force of billion pounds from dropping a piano, you screwed up. If 
you're an accountant, and calculate that you owe a million dollars 
in taxes on a thousand dollars of income, you screwed up. If you 
build a diagnostic X-ray machine, and the control software computes 
a lethal dose to administer, you screwed up.


Apply common sense and assert on unreasonable results, because your 
code is broken.


That's what he, and apparently you, don't get. How are you going to 
use an assert to check that the color of a title bar is valid? Try 
and implement that assert, and let me know what you come up with.


If you have the ability to screenshot a window like I do, oh, one 
simple method call is all that's required, with a simple index to get the 
color.


But that isn't something I'd go test... Too much system-y stuff that 
can modify it.


And you're putting that into production code? Cause that's the entire 
point of this topic :).


Goodness no. I can BSOD Windows just by writing user-land code that 
pretty much every program uses (yes, related). Some things can definitely 
be tested in an assert; however, like Walter has been arguing, some are better 
left untested in production.


Keep in mind, a window whose decorations have changed color is a very 
reasonable and expected situation. It is nowhere near an error.


This is one of the reasons I wouldn't bother with automated UI testing. 
Too many things that are not related to your code can make it fail, and 
integration may as well not exist cross-platform anyway, let alone be 
well defined.


Re: This thread on Hacker News terrifies me

2018-09-01 Thread tide via Digitalmars-d
On Saturday, 1 September 2018 at 12:49:12 UTC, rikki cattermole 
wrote:

On 02/09/2018 12:37 AM, tide wrote:
On Saturday, 1 September 2018 at 08:18:03 UTC, Walter Bright 
wrote:

On 8/31/2018 7:28 PM, tide wrote:
I'm just wondering but how would you code an assert to 
ensure the variable for a title bar is the correct color? 
Just how many asserts are you going to have in your 
real-time game that can be expected to run at 144+ fps ?


Experience will guide you on where to put the asserts.

But really, just apply common sense. It's not just for 
software. If you're a physicist, and your calculations come 
up with a negative mass, you screwed up. If you're a 
mechanical engineer, and calculate a force of billion pounds 
from dropping a piano, you screwed up. If you're an 
accountant, and calculate that you owe a million dollars in 
taxes on a thousand dollars of income, you screwed up. If you 
build a diagnostic X-ray machine, and the control software 
computes a lethal dose to administer, you screwed up.


Apply common sense and assert on unreasonable results, 
because your code is broken.


That's what he, and apparently you, don't get. How are you 
going to use an assert to check that the color of a title bar 
is valid? Try and implement that assert, and let me know what 
you come up with.


If you have the ability to screenshot a window like I do, oh, 
one simple method call is all that's required, with a simple index 
to get the color.


But that isn't something I'd go test... Too much system-y stuff 
that can modify it.


And you're putting that into production code? Cause that's the 
entire point of this topic :).


Re: This thread on Hacker News terrifies me

2018-09-01 Thread rikki cattermole via Digitalmars-d

On 02/09/2018 12:21 AM, tide wrote:

On Saturday, 1 September 2018 at 05:53:12 UTC, rikki cattermole wrote:

On 01/09/2018 12:40 PM, tide wrote:

On Friday, 31 August 2018 at 22:42:39 UTC, Walter Bright wrote:

On 8/31/2018 2:40 PM, tide wrote:
I don't think I've ever had a **game** hung up in a black screen 
and not be able to close it.


I've had that problem with every **DVD player** I've had in the last 
20 years. Power cycling is the only fix.


Two very different things, odds are your DVD player's code isn't even 
written with a complete C compiler or libraries.


And yet they manage to run a JVM with Java on it.


Not the ones Walter is talking about. I rarely have to power cycle any 
smart device, even my phone which is running so much shit on it.


For some reason I have memories related to DVD players containing a JVM 
to provide interactivity. But it doesn't look like those memories were 
based on anything. So ignore me.


Re: This thread on Hacker News terrifies me

2018-09-01 Thread rikki cattermole via Digitalmars-d

On 02/09/2018 12:37 AM, tide wrote:

On Saturday, 1 September 2018 at 08:18:03 UTC, Walter Bright wrote:

On 8/31/2018 7:28 PM, tide wrote:
I'm just wondering but how would you code an assert to ensure the 
variable for a title bar is the correct color? Just how many asserts 
are you going to have in your real-time game that can be expected to 
run at 144+ fps ?


Experience will guide you on where to put the asserts.

But really, just apply common sense. It's not just for software. If 
you're a physicist, and your calculations come up with a negative 
mass, you screwed up. If you're a mechanical engineer, and calculate a 
force of billion pounds from dropping a piano, you screwed up. If 
you're an accountant, and calculate that you owe a million dollars in 
taxes on a thousand dollars of income, you screwed up. If you build a 
diagnostic X-ray machine, and the control software computes a lethal 
dose to administer, you screwed up.


Apply common sense and assert on unreasonable results, because your 
code is broken.


That's what he, and apparently you, don't get. How are you going to use 
an assert to check that the color of a title bar is valid? Try and 
implement that assert, and let me know what you come up with.


If you have the ability to screenshot a window like I do, oh, one simple 
method call is all that's required, with a simple index to get the color.


But that isn't something I'd go test... Too much system-y stuff that can 
modify it.


Re: This thread on Hacker News terrifies me

2018-09-01 Thread tide via Digitalmars-d
On Saturday, 1 September 2018 at 08:18:03 UTC, Walter Bright 
wrote:

On 8/31/2018 7:28 PM, tide wrote:
I'm just wondering but how would you code an assert to ensure 
the variable for a title bar is the correct color? Just how 
many asserts are you going to have in your real-time game that 
can be expected to run at 144+ fps ?


Experience will guide you on where to put the asserts.

But really, just apply common sense. It's not just for 
software. If you're a physicist, and your calculations come up 
with a negative mass, you screwed up. If you're a mechanical 
engineer, and calculate a force of billion pounds from dropping 
a piano, you screwed up. If you're an accountant, and calculate 
that you owe a million dollars in taxes on a thousand dollars 
of income, you screwed up. If you build a diagnostic X-ray 
machine, and the control software computes a lethal dose to 
administer, you screwed up.


Apply common sense and assert on unreasonable results, because 
your code is broken.


That's what he, and apparently you, don't get. How are you going 
to use an assert to check that the color of a title bar is valid? 
Try and implement that assert, and let me know what you come up 
with.


Re: This thread on Hacker News terrifies me

2018-09-01 Thread tide via Digitalmars-d
On Saturday, 1 September 2018 at 08:05:58 UTC, Walter Bright 
wrote:

On 8/31/2018 5:47 PM, tide wrote:
I've already read them before. Why don't you explain what is 
wrong with it rather than posting articles.


Because the articles explain the issues at length. Explaining 
why your proposal is deeply flawed was the entire purpose I 
wrote them.


I didn't write a proposal. I was explaining a flaw in your 
proposal.


You are just taking one-line comments without even thinking 
about the context.


We can start with the observation that a fly-by-wire is not a 
fundamentally different system than a fully powered hydraulic 
system or even a pilot muscle cable system, when we're talking 
about safety principles.


It is vastly different, do you know what fly by wire is? It means 
the computer is taking input digitally and turning the commands 
from the digital input into actual output. If the system 
controlling that just stops working, how do you expect the pilot 
to fly the plane, while all they are doing is moving a digital 
sensor that does nothing because the system that reads its 
input hit an assert?


There's nothing magic about software. It's just more 
complicated (a fact that makes it even MORE important to adhere 
to sound principles, not throw them out the window).


I didn't say to throw them out the door, I'm saying there are a lot of 
different ways to do things. And using asserts isn't the one ring 
to rule all safety measures. There are different methods, and 
depending on the application, as with anything, each has its pros and 
cons, and a different method may be more suitable.




Re: This thread on Hacker News terrifies me

2018-09-01 Thread tide via Digitalmars-d
On Saturday, 1 September 2018 at 07:59:27 UTC, Walter Bright 
wrote:

On 8/31/2018 5:40 PM, tide wrote:

On Friday, 31 August 2018 at 22:42:39 UTC, Walter Bright wrote:

On 8/31/2018 2:40 PM, tide wrote:
I don't think I've ever had a **game** hung up in a black 
screen and not be able to close it.


I've had that problem with every **DVD player** I've had in 
the last 20 years. Power cycling is the only fix.


Two very different things, odds are your DVD player's code 
isn't even written with a complete C compiler or libraries.


Doesn't matter. It's clear that DVD player software is written 
by cowboy programmers who likely believe that it's fine to 
continue running a program after it has entered an invalid 
state, presumably to avoid annoying customers.


Newer DVD/Bluray players have an ethernet port on the back. I'd 
never connect such a P.O.S. malware incubator to my LAN.


It does matter. I've programmed on embedded systems where the 
filename length was limited to 10 or so characters. There were 
all kinds of restrictions. How do you know that when you have to 
power cycle, it isn't an assert being hit, and that the power 
cycle isn't the result of a hardware limitation that these "cowboy 
programmers" had no control over? You are making a lot of wild 
assumptions to try and prove a point, and claiming that all bugs 
can be solved with asserts (which they can't). Hey guys, race 
conditions aren't a problem, just use an assert, mission fucking 
accomplished.


Re: This thread on Hacker News terrifies me

2018-09-01 Thread tide via Digitalmars-d
On Saturday, 1 September 2018 at 05:53:12 UTC, rikki cattermole 
wrote:

On 01/09/2018 12:40 PM, tide wrote:

On Friday, 31 August 2018 at 22:42:39 UTC, Walter Bright wrote:

On 8/31/2018 2:40 PM, tide wrote:
I don't think I've ever had a **game** hung up in a black 
screen and not be able to close it.


I've had that problem with every **DVD player** I've had in 
the last 20 years. Power cycling is the only fix.


Two very different things, odds are your DVD player's code 
isn't even written with a complete C compiler or libraries.


And yet they manage to run a JVM with Java on it.


Not the ones Walter is talking about. I rarely have to power 
cycle any smart device, even my phone which is running so much 
shit on it.


Re: This thread on Hacker News terrifies me

2018-09-01 Thread Walter Bright via Digitalmars-d

On 9/1/2018 3:49 AM, Dennis wrote:

On Friday, 31 August 2018 at 22:23:09 UTC, Walter Bright wrote:

For example, in any CS program, are there any courses at all about this?


In Year 1 Q4 of my Bachelor CS, there was a course "Software Testing and Quality 
Engineering" which covered things like test types (unit, end-to-end, smoke, 
etc.), code coverage and design by contract. They taught how to implement 
invariants, preconditions and postconditions in Java by manually placing asserts 
(since unlike D, there are no `in`, `out` or `invariant` keywords in Java), but I 
don't recall anything related to recovery from errors, or using aviation safety 
principles to make a safe system from unreliable parts. They said that you can 
decide between security and performance when choosing to leave asserts on/off in 
release builds.
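
(For reference, the D features mentioned there look roughly like this; a 
minimal sketch with an invented Account type:)

// Design by contract with D's built-in keywords: no manually placed
// asserts, and the checks compile out in -release builds.
struct Account
{
    long balance;

    invariant
    {
        assert(balance >= 0, "balance must never go negative");
    }

    void withdraw(long amount)
    in
    {
        assert(amount > 0, "amount must be positive");
        assert(amount <= balance, "insufficient funds");
    }
    out
    {
        assert(balance >= 0);
    }
    do
    {
        balance -= amount;
    }
}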


Sigh.

It's not just the software industry. Industry outside of aerospace appears to be 
ignorant of it. See the Deepwater Horizon, Fukushima, medical devices, Toyota 
car computers, it just goes on and on.


One of my favorite examples is when the power failed in New Orleans during a 
storm, and the city flooded. Guess where the backup generators were located? In 
the basements! The flooding took them out. Only one building had power after the 
disaster, and they'd located the emergency generator above sea level.


Only one did that.

The backups were destroyed by the same situation that caused the need for the 
backups - flooding from power failure. Completely worthless design, because the 
systems were coupled.


Re: This thread on Hacker News terrifies me

2018-09-01 Thread Walter Bright via Digitalmars-d

On 9/1/2018 2:15 AM, Paulo Pinto wrote:

On Saturday, 1 September 2018 at 08:19:49 UTC, Walter Bright wrote:

On 8/31/2018 11:59 PM, Paulo Pinto wrote:

For example, in any CS program, are there any courses at all about this?

Yes, we had them on my degree,


I'm curious how the courses you took compared with the articles I wrote about 
it.


I will read the articles later, but as overview, we learned about:
[...]

It appears to have nothing related to what the articles are about.

I'm rather sad that I've never seen these ideas outside of the aerospace 
industry. Added to that is all the pushback on them I get here, on reddit, and 
on hackernews.


I see the results all the time. Like when someone can use a radio to hack into a 
car computer via the keyless door locks, and take over the steering and braking 
system. Even worse, when engineers discuss that, it never occurs to them that 
critical systems should be electrically separate from the infotainment system 
and anything that receives wireless data. They just talk about "fixing the bug".


BTW, people who run spy networks figured this out long ago. It's all divided up 
by "need to know" rules and cells. Any spy captured and tortured can only give 
up minimal information, not something that will allow the enemy to roll up the 
entire network.


Re: This thread on Hacker News terrifies me

2018-09-01 Thread Dennis via Digitalmars-d
On Saturday, 1 September 2018 at 08:18:03 UTC, Walter Bright 
wrote:

On 8/31/2018 7:28 PM, tide wrote:
I'm just wondering but how would you code an assert to ensure 
the variable for a title bar is the correct color? Just how 
many asserts are you going to have in your real-time game that 
can be expected to run at 144+ fps ?


Experience will guide you on where to put the asserts.

(...)

Apply common sense and assert on unreasonable results, because 
your code is broken.


Your examples are valid, but I just wanted to say that game 
development is a bit of a 'special' case in software engineering 
because there is often no clear 'correct' output for an input. 
The desired output is simply one that makes the players happy, 
which is subjective.


Take for example a 3D game where a car gets stuck between 
bollards and launches into the air. The problem is that real-time 
physics engines work in discrete steps and try to solve 
constraints by adding force to push overlapping bodies away from 
each other. When a rigid body gets stuck inside another rigid 
body, the force it generates can be comically large. This problem 
is well known but a 'proper' fix is not easy, it's often solved 
by designing the geometry so that cars can't get stuck like that. 
Would an `assert(car.yVelocity < 200 m/s)` that causes the game 
to crash when that happens really make the game better? Many 
players actually enjoy such glitches. They don't like when their 
character randomly clips through the floor and falls into a void 
however. But how would you assert that doesn't happen? There's no 
formal definition for it.
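
To illustrate with a hedged sketch (the Car type and both thresholds are 
invented; as said, there's no formal predicate for "fell through the 
floor"):

struct Car
{
    float yPosition; // metres, updated by the physics step
    float yVelocity; // metres/second
}

void sanityCheck(ref const Car car)
{
    // The crash-the-game policy applied to the launch glitch; note
    // that players might prefer the comic launch to a hard abort.
    assert(car.yVelocity < 200.0f, "solver produced an absurd velocity");

    // A crude "kill plane" stand-in for clipping through the floor,
    // since the real condition has no formal definition.
    assert(car.yPosition > -1_000.0f, "car fell out of the world");
}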


By the way, I'm sure the way the Unity game engine treats asserts 
will make you cry:
"A failure of an assertion method does not break the control flow 
of the execution. On a failure, an assertion message is logged 
(LogType.Assert) and the execution continues."[1]


That's in debug builds, in release builds they are removed 
completely. So my `Assert.IsNotNull()` fails but the assert 
message quickly gets buried under a slew of errors. Note that the 
game keeps running, the 'C# script component' of the entity in 
question just ceases to work. "The show must go on!"


[1] 
https://docs.unity3d.com/ScriptReference/Assertions.Assert.html


Re: This thread on Hacker News terrifies me

2018-09-01 Thread Jonathan M Davis via Digitalmars-d
On Saturday, September 1, 2018 2:19:07 AM MDT Kagamin via Digitalmars-d 
wrote:
> On Friday, 31 August 2018 at 21:09:21 UTC, H. S. Teoh wrote:
> >> Some countries do have engineering certifications and
> >> professional permits for software engineering, but its still a
> >> minority.
> >
> > [...]
> >
> > It's precisely for this reason that the title "software
> > engineer" makes me cringe on the one hand, and snicker on the
> > other hand.  I honestly cannot keep a straight face when using
> > the word "engineering" to describe what a typical programmer
> > does in the industry these days.
>
> Huh? I'm pretty sure they mean it was a management decision. Why
> do you blame engineers for doing what they are asked to do?
> Making them write code properly is as simple as paying for
> exactly that.

I think that his point was more that it's sometimes argued that software
engineering really isn't engineering in the classical sense. If you're
talking about someone like a civil engineer for instance, the engineer
applies well-known and established principles to everything they do in a
disciplined way. The engineering aspects of civil engineering aren't
subjective at all. They're completely based in the physical sciences.
Software engineering on the other hand isn't based on the physical sciences
at all, and there really isn't general agreement on what good software
engineering principles are. There are aspects of it that are very much like
engineering and others that are very much subjective.

Someone else could probably explain it better than I could, but based on
some definitions of engineering, software engineering definitely doesn't
count, but it _does_ have aspects of engineering, so it can be argued either
way. Wikipedia even has a "controversy" section on its page for software
engineering that talks briefly about it:

https://en.wikipedia.org/wiki/Software_engineering

I'm not sure that I really agree that software engineering isn't
engineering, but the folks who argue against it do have a point in that
software engineering is definitely not like most other engineering
disciplines, and good engineering practices are nowhere near as well-defined
in software engineering as those in other engineering fields.

Issues with management cause other problems on top of all of that, but even
if you have a group of software engineers doing their absolute best to
follow good software engineering principles without any kind of management
interference, what they're doing is still very different from most
engineering disciplines, and it likely wouldn't be hard for another group of
competent software engineers to make solid arguments about why the good
software engineering practices that they're following actually aren't all
that good.

- Jonathan M Davis





Re: This thread on Hacker News terrifies me

2018-09-01 Thread Jonathan M Davis via Digitalmars-d
On Saturday, September 1, 2018 1:59:27 AM MDT Walter Bright via Digitalmars-
d wrote:
> On 8/31/2018 5:40 PM, tide wrote:
> > On Friday, 31 August 2018 at 22:42:39 UTC, Walter Bright wrote:
> >> On 8/31/2018 2:40 PM, tide wrote:
> >>> I don't think I've ever had a **game** hung up in a black screen and
> >>> not be able to close it.
> >>
> >> I've had that problem with every **DVD player** I've had in the last 20
> >> years. Power cycling is the only fix.
> >
> > Two very different things; odds are your DVD player's code isn't even
> > written with a complete C compiler or libraries.
>
> Doesn't matter. It's clear that DVD player software is written by cowboy
> programmers who likely believe that it's fine to continue running a
> program after it has entered an invalid state, presumably to avoid
> annoying customers.
>
> Newer DVD/Bluray players have an ethernet port on the back. I'd never
> connect such a P.O.S. malware incubator to my LAN.

Unfortunately, in the case of Blu-ray players, if you don't, at some point,
you likely won't be able to play newer Blu-rays, because they keep updating
the DRM on the discs, requiring updates to the players - which is annoying
enough on its own, but when you consider that if it weren't for the DRM,
there wouldn't be any reason to hook up a Blu-ray player to a network, the
fact that the DRM essentially requires it is that much more egregious. But
sadly, increasingly, they seem to want you to hook up all of your stray
electronics to the network, which is anything but safe. Fortunately, not all
of them actually require it (e.g. I've never hooked up my TV to any network,
and I see no value in anything it runs that would require it), but too many
do.

- Jonathan M Davis





Re: This thread on Hacker News terrifies me

2018-09-01 Thread Dennis via Digitalmars-d

On Friday, 31 August 2018 at 22:23:09 UTC, Walter Bright wrote:
For example, in any CS program, are there any courses at all 
about this?


In Year 1 Q4 of my Bachelor CS, there was a course "Software 
Testing and Quality Engineering" which covered things like test 
types (unit, end-to-end, smoke, etc.), code coverage and design 
by contract. They taught how to implement invariants, 
preconditions and postconditions in Java by manually placing 
asserts (since unlike D, there's no `in`, `out` or `invariant` 
keywords in Java) but I don't recall anything related to recovery 
from errors, or using aviation safety principles to make a safe 
system from unreliable parts. They said that you can decide 
between security and performance when choosing to leave asserts 
on/off in release builds.
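
For comparison, here is a minimal sketch of how the same 
constructs look with D's keywords (the Account type is just an 
invented example; dmd checks these by default and drops them with 
-release, which is the same on/off trade-off they described):

    struct Account
    {
        long balance;

        invariant // checked on entry/exit of public member functions
        {
            assert(balance >= 0, "balance must never go negative");
        }

        void withdraw(long amount)
        in { assert(amount > 0 && amount <= balance, "invalid withdrawal"); }
        out { assert(balance >= 0); }
        do
        {
            balance -= amount;
        }
    }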


In Year 2 Q1 there was a follow-up "Software Engineering Methods" 
course which talked about Design Patterns (the GoF ones), process 
(SCRUM / Agile), and designing (with UML and other graphs). No 
other courses since then talked about software engineering, they 
were more focused on specific fields (big data, signal 
processing, embedded systems) and fundamental computer science 
(algorithms, complexity theory, programming languages).


Re: This thread on Hacker News terrifies me

2018-09-01 Thread Shachar Shemesh via Digitalmars-d

On 31/08/18 23:22, Steven Schveighoffer wrote:

On 8/31/18 3:50 PM, Walter Bright wrote:

https://news.ycombinator.com/item?id=17880722

Typical comments:

"`assertAndContinue` crashes in dev and logs an error and keeps going 
in prod. Each time we want to verify a runtime assumption, we decide 
which type of assert to use. We prefer `assertAndContinue` (and I push 
for it in code review),"


e.g. D's assert. Well, actually, D doesn't log an error in production.

-Steve


I think it's the music of the thing rather than the thing itself.

Mecca has ASSERT, a condition that is always checked and always 
crashes the program if it fails, and DBG_ASSERT, which, like D's 
built-in assert, is skipped in release mode (essentially, an assert 
where you can log what went wrong without using the GC-requiring 
format).
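
Roughly like this, as a sketch of the intent rather than Mecca's 
actual code (Mecca's versions also avoid the GC, which this sketch 
does not bother with):

    import core.stdc.stdlib : abort;
    import std.stdio : stderr;

    // Always checked, even in release builds: for conditions cheap
    // enough to verify in production.
    void ASSERT(lazy bool cond, string msg,
                string file = __FILE__, size_t line = __LINE__)
    {
        if (!cond)
        {
            stderr.writefln("%s(%s): ASSERT failed: %s", file, line, msg);
            abort();
        }
    }

    // Skipped in release mode, like the built-in assert: the check
    // is only compiled in when building with -debug.
    void DBG_ASSERT(lazy bool cond, string msg,
                    string file = __FILE__, size_t line = __LINE__)
    {
        debug ASSERT(cond, msg, file, line);
    }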


When you compare this to what Walter was quoting, you get the same end 
result, but a vastly different intention. It's one thing to say "this 
ASSERT is cheap enough to be tested in production, while this DBG_ASSERT 
one is optimized out". It's another to say "well, in production we want 
to keep going no matter what, so we'll just ignore the asserts".


Shachar


Re: This thread on Hacker News terrifies me

2018-09-01 Thread Paulo Pinto via Digitalmars-d
On Saturday, 1 September 2018 at 08:19:49 UTC, Walter Bright 
wrote:

On 8/31/2018 11:59 PM, Paulo Pinto wrote:
For example, in any CS program, are there any courses at all 
about this?

Yes, we had them on my degree,


I'm curious how the courses you took compared with the articles 
I wrote about it.


I will read the articles later, but as an overview, we learned about:

- requirements gathering
- architecture design based on several methodologies, back then 
it was Booch, ER, Coad and the newly introduced UML

- introduction to way to manage source control, rcs and cvs
- design by contract, via Eiffel
- use of abstract data types as a means to achieve clean 
architectures with separation of concerns
- ways to perform testing, way before unit testing was even a 
thing
- we had a semester project going from requirements gathering 
through design and implementation, which was assessed on how well 
it achieved those goals.


In my year, that was the implementation of a subway ticketing 
system with automatic control of access doors, just the software 
part. The sensors and door controls were simulated.


There was also the implementation of a general-purpose B+ tree 
library, another semester-long project, whose integration tests 
were not available to us.


We had one week to make sure our library could link and 
successfully execute all the tests that the teacher came up with 
for the acceptance testing phase. We did not have access to their 
source code; we just provided our lib.a.









Re: This thread on Hacker News terrifies me

2018-09-01 Thread Kagamin via Digitalmars-d

On Friday, 31 August 2018 at 21:09:21 UTC, H. S. Teoh wrote:
Some countries do have engineering certifications and 
professional permits for software engineering, but its still a 
minority.

[...]

It's precisely for this reason that the title "software 
engineer" makes me cringe on the one hand, and snicker on the 
other hand.  I honestly cannot keep a straight face when using 
the word "engineering" to describe what a typical programmer 
does in the industry these days.


Huh? I'm pretty sure they mean it was a management decision. Why 
do you blame engineers for doing what they are asked to do? 
Making them write code properly is as simple as paying for 
exactly that.


Re: This thread on Hacker News terrifies me

2018-09-01 Thread Walter Bright via Digitalmars-d

On 8/31/2018 7:28 PM, tide wrote:
I'm just wondering, but how would you code an assert to ensure the variable for 
a title bar is the correct color? Just how many asserts are you going to have in 
your real-time game that is expected to run at 144+ fps?


Experience will guide you on where to put the asserts.

But really, just apply common sense. It's not just for software. If you're a 
physicist, and your calculations come up with a negative mass, you screwed up. 
If you're a mechanical engineer, and calculate a force of billion pounds from 
dropping a piano, you screwed up. If you're an accountant, and calculate that 
you owe a million dollars in taxes on a thousand dollars of income, you screwed 
up. If you build a diagnostic X-ray machine, and the control software computes a 
lethal dose to administer, you screwed up.


Apply common sense and assert on unreasonable results, because your code is 
broken.
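
In code, that's nothing more than a sanity check on the computed result. A 
minimal D sketch (the function and the plausibility limit are invented for 
illustration):

    // A result outside any physically sensible range means the code is
    // broken, so abort rather than administer it.
    double computeDose(double intensity, double seconds)
    out (dose)
    {
        assert(dose >= 0.0 && dose < 5.0, "computed dose is not plausible");
    }
    do
    {
        return intensity * seconds;
    }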


Re: This thread on Hacker News terrifies me

2018-09-01 Thread Walter Bright via Digitalmars-d

On 8/31/2018 11:59 PM, Paulo Pinto wrote:

For example, in any CS program, are there any courses at all about this?

Yes, we had them on my degree,


I'm curious how the courses you took compared with the articles I wrote about 
it.


Re: This thread on Hacker News terrifies me

2018-09-01 Thread Walter Bright via Digitalmars-d

On 8/31/2018 5:47 PM, tide wrote:
I've already read them before. Why don't you explain what is wrong with it 
rather than posting articles?


Because the articles explain the issues at length. Explaining why your proposal 
is deeply flawed was the entire reason I wrote them.



You are just taking one-line comments without even 
thinking about the context.


We can start with the observation that a fly-by-wire system is not fundamentally 
different from a fully powered hydraulic system or even a pilot-muscle-and-cable 
system, when we're talking about safety principles.


There's nothing magic about software. It's just more complicated (a fact that 
makes it even MORE important to adhere to sound principles, not throw them out 
the window).




Re: This thread on Hacker News terrifies me

2018-09-01 Thread Walter Bright via Digitalmars-d

On 8/31/2018 5:40 PM, tide wrote:

On Friday, 31 August 2018 at 22:42:39 UTC, Walter Bright wrote:

On 8/31/2018 2:40 PM, tide wrote:
I don't think I've ever had a **game** hung up in a black screen and not be 
able to close it.


I've had that problem with every **DVD player** I've had in the last 20 years. 
Power cycling is the only fix.


Two very different things; odds are your DVD player's code isn't even written 
with a complete C compiler or libraries.


Doesn't matter. It's clear that DVD player software is written by cowboy 
programmers who likely believe that it's fine to continue running a program 
after it has entered an invalid state, presumably to avoid annoying customers.


Newer DVD/Bluray players have an ethernet port on the back. I'd never connect 
such a P.O.S. malware incubator to my LAN.


Re: This thread on Hacker News terrifies me

2018-09-01 Thread Paulo Pinto via Digitalmars-d
On Saturday, 1 September 2018 at 05:57:06 UTC, rikki cattermole 
wrote:

It all comes down to not enough time to cover the material.

Programming is the largest scientific field in existence. It 
has merged material from Physics, Chemistry, Psychology (in a 
BIG WAY), Biology, you name it, and that's ignoring Mathematics.


Three to four years is just scratching the surface of what 
needs to be known. There is simply no way to ignore that fact.


In many European countries it is 5 years for an engineering 
degree and 3 for polytechnic.


Then you had 2 years for an MSc and up to 3 for a PhD.

With the EU normalization (Bologna), the 5-year degrees became MSc 
ones, because there were a few countries where a degree would 
take 3 years.


So in the countries where an engineering degree used to take 5 
years, the universities started pushing for what they call an 
"integrated MSc", meaning the same contents as before Bologna, but 
with an MSc title at the end.


Re: This thread on Hacker News terrifies me

2018-09-01 Thread Paulo Pinto via Digitalmars-d

On Friday, 31 August 2018 at 22:23:09 UTC, Walter Bright wrote:

On 8/31/2018 1:42 PM, Paulo Pinto wrote:
Some countries do have engineering certifications and 
professional permits for software engineering, but its still a 
minority.


That won't fix anything, because there is NO conventional 
wisdom in software engineering for how to deal with program 
bugs. I suspect I am the first to try to apply principles from 
aerospace to general engineering (including software).


For example, in any CS program, are there any courses at all 
about this?


Yes, we had them on my degree, given that Portugal is one of 
those countries with engineering certifications for software 
engineering.


Certified degrees must comply with an expected level of quality, 
otherwise they aren't legally allowed to call themselves 
engineering degrees in informatics.


Now, what we don't have is the requirement of professional permit 
for everyone, unless one wants to make use of the title in public 
events or has the legal responsibility to sign off projects.


I expect other countries with engineering colleges to have 
similar situations.


--
Paulo


Re: This thread on Hacker News terrifies me

2018-09-01 Thread Ola Fosheim Grøstad via Digitalmars-d

On Friday, 31 August 2018 at 23:20:08 UTC, H. S. Teoh wrote:
The problem is that there is a disconnect between academia and 
the industry.


No, there isn't. Plenty of research is focused on software 
engineering and software process improvement. Those are separate 
research branches.


The problem is that most people in forums don't have a basic 
grasp of how improbable it is for complex systems to be bug free.


Taking a course where you formally prove a program to be 
bug free fixes that problem real fast.


Also, asserts don't prevent bugs; they can trip hours or days 
after the problem arose. The problem doesn't have to be in the code 
either; it could be the data structure, how different systems 
interact, etc. So just running the buggy software is problematic, 
even before the assert trips. (E.g. your plane could be in flames 
before the assert trips.)


Also, having a shutdown because a search function sometimes 
returns the wrong result is just as unacceptable as being late 
to work because you couldn't drive your car after it "asserted" 
that the clutch was not in perfect condition.


The goal in academia is to produce new research, to find 
ground-breaking new theories that bring a lot of recognition


Most research does not focus on ground-breaking theories, but on 
marginal improvements to existing knowledge.


 > continue existing. As a result, there is heavy focus on the
theoretical concepts, which are the basis for further research, 
rather than pragmatic tedium like how to debug a program.


All formalized knowledge that is reusable is theory. Everything 
you can learn from books is based on theory, unless you only read 
stories and distill knowledge from them with no help from the 
author.


There are of course people who build theory on how to structure, 
test and debug programs.



The goal in the industry is to produce working software.


NO, the goal is to make MONEY. If that means shipping bad 
software and getting to charge large amounts of money for 
consulting then that is what the industry will go for.



using the latest and greatest theoretical concepts.  So they 
don't really care how good a grasp you have on computability 
theory, but they *do* care a lot that you know how to debug a 
program so that it can be shipped on time. (They don't care 
*how* you debug the program, just that you know how to do it, 
and do it efficiently.)


They don't care if it works well, they don't care if it is slow 
and buggy, they care about how the people who pay for it 
respond...


A consequence of this disconnect is that the incentives are set 
up all wrong.


Yes, money does not produce quality products, unless the majority 
of customers are knowledgeable and demanding.


Professors are paid to publish research papers, not to teach 
students.


I think people publish most when they try to become professors. 
After getting there, the main task is to teach others the topic 
and help others to do research (e.g. supervising).


 > research.  After all, it's the research that will win you the
grants, not the fact that you won teaching awards 3 years in a 
row, or that your students wrote a glowing review of your 
lectures. So the quality of teaching already suffers.


They have a salary; they don't need a grant for themselves. They 
need a grant to fund the salaries of Ph.D. students, i.e. they 
need grants to teach how to do research.


Then the course material itself is geared toward producing more 
researchers, rather than industry workers.


Most of them become industry workers...

Is it any wonder at all that the quality of the resulting 
software is so bad?


The current shipping software is much better than the software 
that was shipped in the 80s and 90s. To a large extent thanks to 
more and more software being developed in high level languages 
using well tested libraries and frameworks (not C, C++ etc).


Most cars on the road are somewhat buggy, and they could crash as 
a result of those bugs. Most programs are somewhat buggy, and 
they could crash as a result of those bugs.


Most of the buggy fighter aircraft are on the ground, but would you 
want your car to be unusable 20% of the time?


Would you want Google to disappear 20% of the time because the 
ranking of search results is worse than what the spec says it 
should be?


As usual, these discussions aren't really based on a good 
theoretical understanding of what a bug is.


Try to prove one of your programs formally correct, then you'll 
realize that it is unattainable for most useful programs that 
retain state.


The human body is full of bugs. Hell, we rely on them, even at 
the cell level. We need those parasites to exist. As programs get 
larger you need to focus on a different strategy to get a 
reliable system (e.g. actor-based thinking).




Re: This thread on Hacker News terrifies me

2018-09-01 Thread Ola Fosheim Grøstad via Digitalmars-d
On Saturday, 1 September 2018 at 05:51:10 UTC, rikki cattermole 
wrote:
Then there are polytechnics which I went to for my degree, 
where the focus was squarely on Industry and not on academia at 
all.


But the teaching is based on research in a good engineering 
school...


But in saying that, we had third-year students starting out not 
understanding how CLI arguments work, so...


Proper software engineering really takes 5+ years just to get 
started, 10+ to become actually good at it. Sadly that won't be 
acceptable in our current society.


The root cause of bad software is that many programmers don't 
even have an education in CS or software engineering, or didn't 
do a good job while getting it!


Another problem is that departments get funding based on how many 
students they have, and many students are not fit to be 
programmers. Then you have the recruitment process, where people in 
management without a proper theoretical understanding of the 
topic look for "practical programmers" (must have experience 
with framework X), which basically means that they get programmers 
with low theoretical understanding and therefore fail to build an 
environment where people can learn... So building a good team 
where people can become experts (based on actual research) is 
mostly not happening. It becomes experience-based, and the 
experience is that it isn't broken if customers are willing to 
pay.


Basic capitalism. Happens outside programming too. Make 
good-looking shit that breaks right after the warranty runs out.


Anyway, Software Engineering most certainly is a research 
discipline separate from CS and there is research and theory for 
developing software at different cost levels.


Games are not bug free because that would be extremely expensive, 
and cause massive delays in shipping which makes it impossible to 
plan marketing. Games are less buggy when they reuse existing 
frameworks, but that makes for less exciting designs.




Re: This thread on Hacker News terrifies me

2018-09-01 Thread rikki cattermole via Digitalmars-d

It all comes down to not enough time to cover the material.

Programming is the largest scientific field in existence. It has merged 
material from Physics, Chemistry, Psychology (in a BIG WAY), Biology, 
you name it, and that's ignoring Mathematics.


Three to four years is just scratching the surface of what needs to 
be known. There is simply no way to ignore that fact.


Re: This thread on Hacker News terrifies me

2018-08-31 Thread rikki cattermole via Digitalmars-d
Then there are polytechnics which I went to for my degree, where the 
focus was squarely on Industry and not on academia at all.


But in saying that, we had third-year students starting out not 
understanding how CLI arguments work, so...


Proper software engineering really takes 5+ years just to get started, 
10+ to become actually good at it. Sadly that won't be acceptable in our 
current society.


Re: This thread on Hacker News terrifies me

2018-08-31 Thread rikki cattermole via Digitalmars-d

On 01/09/2018 12:40 PM, tide wrote:

On Friday, 31 August 2018 at 22:42:39 UTC, Walter Bright wrote:

On 8/31/2018 2:40 PM, tide wrote:
I don't think I've ever had a **game** hung up in a black screen and 
not be able to close it.


I've had that problem with every **DVD player** I've had in the last 
20 years. Power cycling is the only fix.


Two very different things; odds are your DVD player's code isn't even 
written with a complete C compiler or libraries.


And yet they manage to run a JVM with Java on it.



Re: This thread on Hacker News terrifies me

2018-08-31 Thread tide via Digitalmars-d

On Friday, 31 August 2018 at 22:05:18 UTC, H. S. Teoh wrote:
On Fri, Aug 31, 2018 at 09:40:50PM +, tide via 
Digitalmars-d wrote:

On Friday, 31 August 2018 at 21:31:02 UTC, 0xEAB wrote:

[...]
> Furthermore, how often have we cursed about games that hung 
> up with a blackscreen and didn't let us close them by any 
> mean other than logging off? If they just crashed, we'd not 
> have run into such problems.


That's assuming an assert catches every error. Not all bugs 
are going to be caught by an assert. I don't think I've ever 
had a game hung up in a black screen and not be able to close 
it.


I have, and that's only one of the better possible scenarios.  
I've had games get into a bad state, which becomes obvious as 
visual glitches, and then proceed to silently and subtly 
corrupt the save file so that on next startup all progress is 
lost.


Had the darned thing aborted at the first visual glitch or 
unexpected internal state, instead of blindly barging on 
pretending that visual glitches are not a real problem, the 
save file might have still been salvageable.


(Yes, visual glitches, in and of themselves, aren't a big deal.
 Most people may not even notice them.  But when they happen 
unexpectedly, they can be a symptom of a deeper, far more 
serious problem. Just like an assert detecting that some 
variable isn't the expected value. Maybe the variable isn't 
even anything important; maybe it just controls the color of 
the title bar or something equally irrelevant. But it could be 
a sign that there's been a memory corruption.  It could be a 
sign that the program is in the middle of being exploited by a 
security hack. The unexpected value in the variable isn't 
merely an irrelevant thing that we can safely ignore; it could 
be circumstantial evidence of something far more serious.  
Continuing to barrel forward in spite of clear evidence 
pointing to a problem is utterly foolish.)



T


I'm just wondering, but how would you code an assert to ensure the 
variable for a title bar is the correct color? Just how many 
asserts are you going to have in your real-time game that is 
expected to run at 144+ fps?


Re: This thread on Hacker News terrifies me

2018-08-31 Thread tide via Digitalmars-d

On Friday, 31 August 2018 at 22:27:47 UTC, Walter Bright wrote:

On 8/31/2018 2:21 PM, tide wrote:

On Friday, 31 August 2018 at 19:50:20 UTC, Walter Bright wrote:
"Stopping all executing may not be the correct 'safe state' 
for an airplane though!"
Depends on the aircraft and how it is implemented. If you have 
a plane that is fly-by-wire and you stop all execution, then 
even the pilot no longer has control of the plane, 
which would be very bad.


I can't even read the rest of posting after this.

Please read the following articles, then come back.

Assertions in Production Code
https://www.digitalmars.com/articles/b14.html

Safe Systems from Unreliable Parts
https://www.digitalmars.com/articles/b39.html

Designing Safe Software Systems Part 2
https://www.digitalmars.com/articles/b40.html


I've already read them before. Why don't you explain what is 
wrong with it rather than posting articles? You are just taking 
one-line comments without even thinking about the context.


Re: This thread on Hacker News terrifies me

2018-08-31 Thread tide via Digitalmars-d

On Friday, 31 August 2018 at 22:42:39 UTC, Walter Bright wrote:

On 8/31/2018 2:40 PM, tide wrote:
I don't think I've ever had a **game** hung up in a black 
screen and not be able to close it.


I've had that problem with every **DVD player** I've had in the 
last 20 years. Power cycling is the only fix.


Two very different things; odds are your DVD player's code isn't 
even written with a complete C compiler or libraries.




Re: This thread on Hacker News terrifies me

2018-08-31 Thread RhyS via Digitalmars-d

On Friday, 31 August 2018 at 23:47:40 UTC, Jonathan M Davis wrote:
There are plenty of cases where the teachers actually do an 
excellent job teaching the material that the courses cover. 
It's just that the material is often about theoretical computer 
science - and this is actually stuff that can be very 
beneficial to becoming an excellent programmer. However, many 
teachers really aren't great programmers. They aren't 
necessarily bad programmers, but unless they spent a bunch of 
time in industry before teaching, odds are that they don't have 
all of the software engineering skills that the students are 
going to need once they get into the field. And most courses 
aren't designed to teach students the practical skills.


Imagine getting an Industrial Engineer who switched over to 
become a teacher, showing up to teach Visual Basic and database 
normalization.


That same teacher used one of those little red VB books as his 
study material (it's been 20+ years, not sure if they still 
exist).


The students ended up helping each other fix issues because 
the teacher was useless. It was even so bad that he took examples 
out of the book, slightly reformatted the questions, and used them 
for tests. Of course, one of our fellow students figured this out; 
he got his hands on the book and voila, every test answer.


Worst of all, the teacher was so BORING that nobody wanted to 
sit in the front of the class, and his database class was so 
bad that people literally needed to keep each other from falling 
asleep. I am not joking!


You can guess that I never learned database normalization in 
school and ended up learning it on my own. So yeah, first-hand 
experience with bad teachers.


The guy who taught C++ was way better, and you actually learned from 
him. A bit of a scary guy, but he knew his stuff. Teachers make all 
the difference in whether children/students learn anything.


If they are boring, burned out, or just doing it to earn money, it 
shows in the lackluster responses and scores of the students. A good 
teacher draws in the students and helps them focus. A good 
teacher does not make things overcomplicated and does not assume 
that everybody understands. I've seen brilliant teachers who are 
horrible at teaching because they are so smart (or have repeated the 
same material so much) that they assume everybody understands 
everything as they do.


Teachers make all the difference in teaching, but a lot of it is so 
politicized (internal power games, ...) that good teachers tend to 
leave and bad ones just end up doing the job, because it's a good 
government/public-servant job with nice vacation perks (here in 
the EU).


Re: This thread on Hacker News terrifies me

2018-08-31 Thread H. S. Teoh via Digitalmars-d
On Fri, Aug 31, 2018 at 05:47:40PM -0600, Jonathan M Davis via Digitalmars-d 
wrote:
[...]
> The school I went to (Cal Poly, San Luis Obispo) at least tries to
> focus on the practical side of things (their motto is "learn by
> doing"), and when I went there, they even specifically had a Software
> Engineering degree where you had to take a year-long course where you
> did a project in a team for a company. But at least at the time, the
> big difference between the SE and CS degrees was that they required
> more classes with group work and fewer theoretical classes, and there
> certainly weren't any classes on something like debugging. The
> software engineering-centric classes focused more on a combination of
> teaching stuff like classic design patterns and then having you do
> projects in groups. And that was helpful, but it still didn't really
> prepare you for what you were going to be doing in your full-time job.
> It's still better than what a lot of schools do though. I'm frequently
> shocked by how little many CS graduates know when they first get out
> of school.
[...]

I suppose it depends on the school.  And yes, I've seen CS graduates who
have literally never written a program longer than 1 page.  I cannot
imagine what kind of shock they must have felt upon getting a job in the
industry and being faced, on their first day, with a 2 million LOC
codebase riddled with hacks, last-minute-rushed fixes, legacy code that
nobody understands anymore, inconsistent / non-existent documentation,
and being tasked with fixing a bug of unclear description and unknown
root cause which could be literally *anywhere* in those 2 million LOC.

I was lucky that I was spared of most of the shock due to having spent a
lot of time working on personal projects while at school, and thereby
picking up many practical debugging skills.


T

-- 
Real men don't take backups. They put their source on a public FTP-server and 
let the world mirror it. -- Linus Torvalds


Re: This thread on Hacker News terrifies me

2018-08-31 Thread Jonathan M Davis via Digitalmars-d
On Friday, August 31, 2018 5:20:08 PM MDT H. S. Teoh via Digitalmars-d 
wrote:
> A consequence of this disconnect is that the incentives are set up all
> wrong.  Professors are paid to publish research papers, not to teach
> students.  Teaching is often viewed as an undesired additional burden
> you're obligated to carry out, a chore that you just want to get over
> with in the fastest, easiest possible way, so that you can go back to
> doing research.  After all, it's the research that will win you the
> grants, not the fact that you won teaching awards 3 years in a row, or
> that your students wrote a glowing review of your lectures. So the
> quality of teaching already suffers.

There are plenty of cases where the teachers actually do an excellent job
teaching the material that the courses cover. It's just that the material is
often about theoretical computer science - and this is actually stuff that
can be very beneficial to becoming an excellent programmer. However, many
teachers really aren't great programmers. They aren't necessarily bad
programmers, but unless they spent a bunch of time in industry before
teaching, odds are that they don't have all of the software engineering
skills that the students are going to need once they get into the field. And
most courses aren't designed to teach students the practical skills. How
that goes exactly depends on the school, with some schools actually trying
to integrate software engineering stuff, but many really don't. So, even if
the schools do an excellent job teaching what they're trying to teach, it
still tends to be on the theoretical side of things. But that may be
improving. Still, the theoretical side is something that programmers should
be learning. It's just that it isn't enough on its own, and it serves more
as a good foundation than as the set of skills that you're going to be using
directly on the job on a day to day basis.

The school I went to (Cal Poly, San Luis Obispo) at least tries to focus on
the practical side of things (their motto is "learn by doing"), and when I
went there, they even specifically had a Software Engineering degree where
you had to take a year-long course where you did a project in a team for a
company. But at least at the time, the big difference between the SE and CS
degrees was that they required more classes with group work and fewer
theoretical classes, and there certainly weren't any classes on something
like debugging. The software engineering-centric classes focused more on a
combination of teaching stuff like classic design patterns and then having
you do projects in groups. And that was helpful, but it still didn't really
prepare you for what you were going to be doing in your full-time job. It's
still better than what a lot of schools do though. I'm frequently shocked by
how little many CS graduates know when they first get out of school.

- Jonathan M Davis





Re: This thread on Hacker News terrifies me

2018-08-31 Thread H. S. Teoh via Digitalmars-d
On Fri, Aug 31, 2018 at 04:45:57PM -0600, Jonathan M Davis via Digitalmars-d 
wrote:
> On Friday, August 31, 2018 4:23:09 PM MDT Walter Bright via Digitalmars-d 
> wrote:
[...]
> > That won't fix anything, because there is NO conventional wisdom in
> > software engineering for how to deal with program bugs. I suspect I
> > am the first to try to apply principles from aerospace to general
> > engineering (including software).
> >
> > For example, in any CS program, are there any courses at all about
> > this?
> 
> There are probably some somewhere, but most CS programs really aren't
> about writing good software or even being a software engineer. Some
> definitely try to bring some focus on that, but it's far, far more
> common that the focus is on computer science concepts and not on
> software engineering. A good CS program gives you a lot of theory, but
> they're rarely big on the practical side of things.
[...]
> It's often pretty scary how poor the average programmer is, and in my
> experience, when trying to hire someone, you can end up going through
> a lot of really bad candidates before finding someone even passable,
> let alone good.
[...]

The problem is that there is a disconnect between academia and the
industry.

The goal in academia is to produce new research, to find ground-breaking
new theories that bring a lot of recognition and fame to the institution
when published. It's the research that will bring in the grants and
enable the institution to continue existing. As a result, there is heavy
focus on the theoretical concepts, which are the basis for further
research, rather than pragmatic tedium like how to debug a program.

The goal in the industry is to produce working software. The industry
doesn't really care if you discovered an amazing new way of thinking
about software; they want to actually produce software that can be
shipped to the customer, even if it isn't using the latest and greatest
theoretical concepts.  So they don't really care how good a grasp you
have on computability theory, but they *do* care a lot that you know how
to debug a program so that it can be shipped on time. (They don't care
*how* you debug the program, just that you know how to do it, and do it
efficiently.)

A consequence of this disconnect is that the incentives are set up all
wrong.  Professors are paid to publish research papers, not to teach
students.  Teaching is often viewed as an undesired additional burden
you're obligated to carry out, a chore that you just want to get over
with in the fastest, easiest possible way, so that you can go back to
doing research.  After all, it's the research that will win you the
grants, not the fact that you won teaching awards 3 years in a row, or
that your students wrote a glowing review of your lectures. So the
quality of teaching already suffers.

Then the course material itself is geared toward producing more
researchers, rather than industry workers.  After all, the institution
needs to replace aging and retiring faculty members in order to continue
existing.  To do that, it needs to produce students who can become
future researchers.  And since research depends on theory, theory is
emphasized, and pragmatism like debugging skills aren't really relevant.

On the other side of the disconnect, the industry expects that students
graduating from prestigious CS programs ought to know how to program.
The hiring manager sees a degree from MIT or Stanford and is immediately
impressed; but he doesn't have the technical expertise to realize what
that degree actually means: that the student excelled in the primarily
theory-focused program, which may or may not translate to practical
skills like programming and debugging ability that would make them
productive in the industry.  All the hiring manager knows is that
institution Y is renowned for their CS program, and therefore he gives
more weight to candidates who hold a degree from Y, even if that
actually doesn't guarantee that the candidate will be a good programmer.
As a result, the software department is filled up with people who are
good at theory, but whose practical programming skills can range
anywhere from passably good to practically non-existent.  And now these
people with wide-ranging programming abilities have to work together as
a team.

Is it any wonder at all that the quality of the resulting software is so
bad?


T

-- 
Amateurs built the Ark; professionals built the Titanic.


Re: This thread on Hacker News terrifies me

2018-08-31 Thread Jonathan M Davis via Digitalmars-d
On Friday, August 31, 2018 4:23:09 PM MDT Walter Bright via Digitalmars-d 
wrote:
> On 8/31/2018 1:42 PM, Paulo Pinto wrote:
> > Some countries do have engineering certifications and professional
> > permits for software engineering, but its still a minority.
>
> That won't fix anything, because there is NO conventional wisdom in
> software engineering for how to deal with program bugs. I suspect I am
> the first to try to apply principles from aerospace to general
> engineering (including software).
>
> For example, in any CS program, are there any courses at all about this?

There are probably some somewhere, but most CS programs really aren't about
writing good software or even being a software engineer. Some definitely try
to bring some focus on that, but it's far, far more common that the focus is
on computer science concepts and not on software engineering. A good CS
program gives you a lot of theory, but they're rarely big on the practical
side of things. I think that it's a rarity for programmers to graduate
college with a good understanding of how to be a good software engineer in
the field. That's the sort of thing that they're much more likely to learn
on the job, and plenty of jobs don't do a good job with it. Motivated
programmers can certainly find resources for learning good software
engineering skills and/or find mentors who can help them learn them - and
many programmers do - but it's _very_ easy to learn enough programming to
get a job and get by without being very good at it. And if a programmer
isn't really motivated to improve themselves, it's highly unlikely that
they're going to have good software engineering skills. It's often pretty
scary how poor the average programmer is, and in my experience, when trying
to hire someone, you can end up going through a lot of really bad candidates
before finding someone even passable, let alone good.

- Jonathan M Davis





Re: This thread on Hacker News terrifies me

2018-08-31 Thread Walter Bright via Digitalmars-d

On 8/31/2018 2:40 PM, tide wrote:
I don't think I've ever had a game hung up in a black 
screen and not be able to close it.


I've had that problem with every DVD player I've had in the last 20 years. Power 
cycling is the only fix.




Re: This thread on Hacker News terrifies me

2018-08-31 Thread Walter Bright via Digitalmars-d

On 8/31/2018 2:21 PM, tide wrote:

On Friday, 31 August 2018 at 19:50:20 UTC, Walter Bright wrote:
"Stopping all executing may not be the correct 'safe state' for an airplane 
though!"
Depends on the aircraft and how it is implemented. If you have a plane that is 
fly-by-wire and you stop all execution, then even the pilot no longer has 
control of the plane, which would be very bad.


I can't even read the rest of posting after this.

Please read the following articles, then come back.

Assertions in Production Code
https://www.digitalmars.com/articles/b14.html

Safe Systems from Unreliable Parts
https://www.digitalmars.com/articles/b39.html

Designing Safe Software Systems Part 2
https://www.digitalmars.com/articles/b40.html



Re: This thread on Hacker News terrifies me

2018-08-31 Thread Walter Bright via Digitalmars-d

On 8/31/2018 1:42 PM, Paulo Pinto wrote:
Some countries do have engineering certifications and professional permits for 
software engineering, but its still a minority.


That won't fix anything, because there is NO conventional wisdom in software 
engineering for how to deal with program bugs. I suspect I am the first to try 
to apply principles from aerospace to general engineering (including software).


For example, in any CS program, are there any courses at all about this?


Re: This thread on Hacker News terrifies me

2018-08-31 Thread H. S. Teoh via Digitalmars-d
On Fri, Aug 31, 2018 at 09:40:50PM +, tide via Digitalmars-d wrote:
> On Friday, 31 August 2018 at 21:31:02 UTC, 0xEAB wrote:
[...]
> > Furthermore, how often have we cursed about games that hung up with
> > a blackscreen and didn't let us close them by any mean other than
> > logging off? If they just crashed, we'd not have run into such
> > problems.
> 
> That's assuming an assert catches every error. Not all bugs are going
> to be caught by an assert. I don't think I've ever had a game hung up
> in a black screen and not be able to close it.

I have, and that's only one of the better possible scenarios.  I've had
games get into a bad state, which becomes obvious as visual glitches,
and then proceed to silently and subtly corrupt the save file so that on
next startup all progress is lost.

Had the darned thing aborted at the first visual glitch or unexpected
internal state, instead of blindly barging on pretending that visual
glitches are not a real problem, the save file might have still been
salvageable.

(Yes, visual glitches, in and of themselves, aren't a big deal.  Most
people may not even notice them.  But when they happen unexpectedly,
they can be a symptom of a deeper, far more serious problem. Just like
an assert detecting that some variable isn't the expected value. Maybe
the variable isn't even anything important; maybe it just controls the
color of the title bar or something equally irrelevant. But it could be
a sign that there's been a memory corruption.  It could be a sign that
the program is in the middle of being exploited by a security hack. The
unexpected value in the variable isn't merely an irrelevant thing that
we can safely ignore; it could be circumstantial evidence of something
far more serious.  Continuing to barrel forward in spite of clear
evidence pointing to a problem is utterly foolish.)
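
And to the earlier question of how you'd even assert on a title bar
color: you don't assert that it is "correct", you assert that it holds
one of the values it can legitimately hold. A hypothetical D sketch:

    enum TitleBarColor : uint
    {
        active   = 0xFF3355AA,
        inactive = 0xFF888888,
    }

    struct Window
    {
        TitleBarColor titleBarColor = TitleBarColor.inactive;

        invariant // runs on entry/exit of public member functions
        {
            // The color itself is cosmetic, but any other bit pattern
            // here means something has scribbled over this struct.
            assert(titleBarColor == TitleBarColor.active ||
                   titleBarColor == TitleBarColor.inactive,
                "title bar color corrupted: possible memory corruption");
        }

        void focus()   { titleBarColor = TitleBarColor.active; }
        void unfocus() { titleBarColor = TitleBarColor.inactive; }
    }

The point of such a check is detecting the corruption, not policing
the color.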


T

-- 
Latin's a dead language, as dead as can be; it killed off all the Romans, and 
now it's killing me! -- Schoolboy


Re: This thread on Hacker News terrifies me

2018-08-31 Thread 0xEAB via Digitalmars-d

On Friday, 31 August 2018 at 21:40:50 UTC, tide wrote:
The asserts being there still cause slowdowns in things that 
would otherwise not be slow, like how D does assert checks for 
indices.


After the bug is fixed and the app is debugged, there's no need 
to keep those assertions.
The release switch will do the job. Anyway, what's that got to do 
with the topic of this thread?



That's assuming an assert catches every error. Not all bugs are 
going to be caught by an assert.


Not really; at least that's not what I meant. (Well, in theory 
one could write enough assertions to detect any error, but... 
nvm, I agree with you.) What I meant was apps misbehaving 
because of obviously ignored errors.



I don't think I've ever had a game hung up in a black screen 
and not be able to close it.


Lucky one :)


Re: This thread on Hacker News terrifies me

2018-08-31 Thread tide via Digitalmars-d

On Friday, 31 August 2018 at 21:31:02 UTC, 0xEAB wrote:

On Friday, 31 August 2018 at 21:21:16 UTC, tide wrote:
Depends on the software being developed. For a game? Stopping 
at every assert would be madness, let alone having an 
overabundance of asserts. Can't even imagine how many asserts 
there would be for something like a matrix multiplication.


If one is aware that something is asserting quite often, why 
don't they just fix the bug that causes that assertion to fail?


The asserts being there still cause slowdowns in things that 
would otherwise not be slow, like how D does assert checks for 
indices.


Furthermore, how often have we cursed about games that hung up 
with a blackscreen and didn't let us close them by any mean 
other than logging off? If they just crashed, we'd not have run 
into such problems.


That's assuming an assert catches every error. Not all bugs are 
going to be caught by an assert. I don't think I've ever had a 
game hung up in a black screen and not be able to close it.


Re: This thread on Hacker News terrifies me

2018-08-31 Thread 0xEAB via Digitalmars-d

On Friday, 31 August 2018 at 21:21:16 UTC, tide wrote:
Depends on the software being developed. For a game? Stopping 
at every assert would be madness, let alone having an 
overabundance of asserts. Can't even imagine how many asserts there 
would be for something like a matrix multiplication.


If one is aware that something is asserting quite often, why 
don't they just fix the bug that causes that assertion to fail?


Furthermore, how often have we cursed about games that hung up 
with a blackscreen and didn't let us close them by any mean other 
than logging off? If they just crashed, we'd not have run into 
such problems.


Re: This thread on Hacker News terrifies me

2018-08-31 Thread tide via Digitalmars-d

On Friday, 31 August 2018 at 19:50:20 UTC, Walter Bright wrote:

https://news.ycombinator.com/item?id=17880722

Typical comments:

"Stopping all executing may not be the correct 'safe state' for 
an airplane though!"


Depends on the aircraft and how it is implemented. If you have a 
plane that is fly-by-wire and you stop all execution, then even 
the pilot no longer has control of the plane, which would 
be very bad.



"One faction believed you should never intentionally crash the 
app"


"One place I worked had a team that was very adamant about not 
really having much error checking. Not much of any qc process, 
either. Wait for someone to complain about bad data and 
respond. Honestly, this worked really well for small, 
skunkworks type projects that needed to be nimble."


And on and on. It's unbelievable. The conventional wisdom in 
software for how to deal with programming bugs simply does not 
exist.


Depends on the software being developed. For a game? Stopping at 
every assert would be madness, let alone having an 
overabundance of asserts. Can't even imagine how many asserts there 
would be for something like a matrix multiplication: an 
operation that would otherwise be branchless gaining numerous 
branches for all the index checks that would be done, twice per 
scalar-value access. And so on and so on.
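
For instance, a naive multiply over flat row-major arrays, which is 
roughly the kind of inner loop I mean (with dmd it takes -release or 
-boundscheck=off to elide these checks in non-@safe code like this):

    // Every a[...], b[...] and c[...] access below carries an
    // implicit bounds check in debug builds: two checked reads per
    // multiply-add in the inner loop, plus one checked write per
    // output element.
    void matmul(const double[] a, const double[] b, double[] c, size_t n)
    {
        foreach (i; 0 .. n)
            foreach (j; 0 .. n)
            {
                double sum = 0.0;
                foreach (k; 0 .. n)
                    sum += a[i * n + k] * b[k * n + j];
                c[i * n + j] = sum;
            }
    }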



Here's the same topic on Reddit with the same awful ideas:

https://www.reddit.com/r/programming/comments/9bl72d/assertions_in_production_code/

No wonder that DVD players still hang when you insert a DVD 
with a scratch on it, and I've had a lot of DVD and Bluray 
players over the last 20 years. No wonder that malware is 
everywhere.


TIL people still use DVD players, while my desktops and 
laptops from the last 7+ years have not even had an optical drive.




Re: This thread on Hacker News terrifies me

2018-08-31 Thread H. S. Teoh via Digitalmars-d
On Fri, Aug 31, 2018 at 08:42:38PM +, Paulo Pinto via Digitalmars-d wrote:
> On Friday, 31 August 2018 at 19:50:20 UTC, Walter Bright wrote:
> > https://news.ycombinator.com/item?id=17880722
[...]
> > And on and on. It's unbelievable. The conventional wisdom in
> > software for how to deal with programming bugs simply does not
> > exist.
> > 
> > Here's the same topic on Reddit with the same awful ideas:
[...]
> Some countries do have engineering certifications and professional
> permits for software engineering, but its still a minority.
[...]

It's precisely for this reason that the title "software engineer" makes
me cringe on the one hand, and snicker on the other hand.  I honestly
cannot keep a straight face when using the word "engineering" to
describe what a typical programmer does in the industry these days.

Where are the procedures, documentations, verification processes,
safeguards, certifications, culpability, etc., etc., that make
engineering the respected profession that it is?  They are essentially
absent in typical software development environments, or only poorly aped
in the most laughable ways.  Most "enterprise" software has no proper
design document at all; what little documentation does exist is merely a
lip-service shoddy hack-job done after the fact to justify the cowboying
that has gone on before.  It's an embarrassment to call this
"engineering", and a shame to real engineers who have actual engineering
procedures to follow.

Until the software industry gets its act together and become a real,
respectable engineering field, we will continue to suffer from
unreliable software that malfunctions, eats your data, and crashes on
unusual inputs for no good reason other than that it was never properly
engineered.  And malware and catastrophic security breaches will
continue to run rampant in spite of millions and billions of dollars
being poured into improving security every year.  And of course, more
and more of modern life is becoming dependent on devices controlled by
software of such calibre (IoT... *shudder*).  It's a miracle that
society hasn't collapsed yet!


T

-- 
There are three kinds of people in the world: those who can count, and those 
who can't.


Re: This thread on Hacker News terrifies me

2018-08-31 Thread Paulo Pinto via Digitalmars-d

On Friday, 31 August 2018 at 19:50:20 UTC, Walter Bright wrote:

https://news.ycombinator.com/item?id=17880722

Typical comments:

"`assertAndContinue` crashes in dev and logs an error and keeps 
going in prod. Each time we want to verify a runtime 
assumption, we decide which type of assert to use. We prefer 
`assertAndContinue` (and I push for it in code review),"


"Stopping all executing may not be the correct 'safe state' for 
an airplane though!"


"One faction believed you should never intentionally crash the 
app"


"One place I worked had a team that was very adamant about not 
really having much error checking. Not much of any qc process, 
either. Wait for someone to complain about bad data and 
respond. Honestly, this worked really well for small, 
skunkworks type projects that needed to be nimble."


And on and on. It's unbelievable. The conventional wisdom in 
software for how to deal with programming bugs simply does not 
exist.


Here's the same topic on Reddit with the same awful ideas:

https://www.reddit.com/r/programming/comments/9bl72d/assertions_in_production_code/

No wonder that DVD players still hang when you insert a DVD 
with a scratch on it, and I've had a lot of DVD and Bluray 
players over the last 20 years. No wonder that malware is 
everywhere.


You would probably enjoy this talk.

"Hayley Denbraver We Are 3000 Years Behind: Let's Talk About 
Engineering Ethics"


https://www.youtube.com/watch?v=jUSJePqplDA

I think that until lawsuits and software refunds due to 
malfunctions escalate to a critical level, the situation will 
hardly change.


Some countries do have engineering certifications and 
professional permits for software engineering, but its still a 
minority.


--
Paulo



Re: This thread on Hacker News terrifies me

2018-08-31 Thread Steven Schveighoffer via Digitalmars-d

On 8/31/18 3:50 PM, Walter Bright wrote:

https://news.ycombinator.com/item?id=17880722

Typical comments:

"`assertAndContinue` crashes in dev and logs an error and keeps going in 
prod. Each time we want to verify a runtime assumption, we decide which 
type of assert to use. We prefer `assertAndContinue` (and I push for it 
in code review),"


e.g. D's assert. Well, actually, D doesn't log an error in production.
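
If you actually wanted the quoted behavior in D, you'd have to roll it 
yourself. Something like this sketch, where `assertAndContinue` is the 
poster's name for it (not anything in druntime or Phobos) and 
Production is a made-up -version identifier:

    import std.stdio : stderr;

    // Crashes in dev builds; logs and keeps going in production builds.
    void assertAndContinue(lazy bool cond, string msg,
                           string file = __FILE__, size_t line = __LINE__)
    {
        version (Production)
        {
            if (!cond)
                stderr.writefln("%s(%s): %s (continuing anyway)",
                    file, line, msg);
        }
        else
        {
            assert(cond, msg);
        }
    }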

-Steve

