On Monday, 3 November 2014 at 03:29:05 UTC, Walter Bright wrote:
On 11/2/2014 3:44 PM, Dicebot wrote:
They have hardware protection against sharing memory between processes. It's a
reasonable level of protection.
reasonable default - yes
reasonable level of protection in general - no

No language can help when that is the requirement.

Yes, because it is a property of the system architecture as a whole, which is exactly what I am speaking about.

It is absolutely different because of scale; having 1 KB of shared memory is very different from having 100 MB shared between processes, including the stack and program code.

It is possible to have a minimal amount of shared mutable memory inside one process. There is nothing inherently stopping one from doing so, just as there is nothing inherently preventing one from corrupting the inter-process shared memory. Different only because of scale -> not really different.

Kernel-mode code is the responsibility of the OS, not the app.

In some (many?) large-scale server systems the OS is the app, or at least heavily integrated with it. Thinking of the app as a single independent user-space process is a
bit... outdated.

Haha, I've used such a system (MSDOS) for many years. Switching to process protection was a huge advance. Sad that we're "modernizing" by reverting to such an awful programming environment.

What is a huge advance for user-land applications is a problem for server code. Have you ever heard the slogan "the OS is the problem, not the solution" that is slowly becoming more popular in the high-load networking world?

It is all about system design.

It's about the probability of coupling and the level of it that your system can
stand. Process-level protection is adequate for most things.

Again, I am fine with advocating it as a reasonable default. What frustrates me is intentionally making any other design harder than it should be by explicitly allowing normal cleanup to be skipped. This behaviour is easy to achieve by installing a custom assert handler (it could be a generic Error handler too), but
impossible to bail out of when it is the default.
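
For illustration, a minimal sketch of that one-liner, assuming druntime's core.exception.assertHandler hook (when set, it is called instead of throwing AssertError):

import core.exception : assertHandler;
import core.stdc.stdlib : abort;

shared static this()
{
    // On any assert failure: halt on the spot - no AssertError, no stack
    // unwinding, no scope(exit)/destructor/finally cleanup.
    assertHandler = function(string file, size_t line, string msg) nothrow
    {
        abort();
    };
}

The reverse direction - getting guaranteed unwinding back when the runtime's default is to skip it - has no equivalent hook.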

Running normal cleanup code when the program is in an undefined, possibly corrupted, state can impede proper shutdown.

Preventing cleanup can be done with roughly one line of user code; enabling it again is effectively impossible. With this decision you don't trade a safer default for a more dangerous default - you trade a configurable default for an unavoidable one.

To preserve the same safe defaults, you could define all thrown Errors to result in a plain HLT / abort call, with the possibility of defining a user handler that actually throws. That would have addressed all the concerns nicely while still not making life harder for those who want cleanup.

Because of the above, avoiding further corruption during cleanup does not sound
to me like a strong enough benefit to force this on everyone.
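
Roughly, the proposed scheme could look like the sketch below. The hook name onThrownError and the setter are hypothetical - nothing like this exists in druntime today - but they show the intent: abort by default, throw only if the application opts in.

import core.stdc.stdlib : abort;

// Hypothetical hook the runtime would call for every Error about to propagate.
alias ErrorHandler = void function(Error err);

private __gshared ErrorHandler errorHandler = null;

void setErrorHandler(ErrorHandler h) { errorHandler = h; }

void onThrownError(Error err)
{
    if (errorHandler !is null)
        errorHandler(err); // a handler may rethrow err to get full unwinding
    else
        abort();           // safe default: plain halt, no cleanup attempted
}

An application that wants cleanup would register something like setErrorHandler(function(Error e) { throw e; }); everyone else keeps abort-by-default.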

I have considerable experience with what programs can do when continuing to run after a bug. This was on real mode DOS, which infamously does not seg fault on errors.

It's AWFUL. I've had quite enough of having to reboot the operating system after every failure, and even then that often wasn't enough because it might scramble the disk driver code so it wouldn't even boot.

I don't dispute the necessity of terminating the program. I dispute the strict relation "program == process", which is impractical and inflexible.

It is my duty to explain how to use the features of the language correctly, including how and why they work the way they do. The how, why, and best practices are not part of a language specification.

You can't just explain things to make them magically appropriate for the user's domain. I fully understand how you propose to design applications. Unfortunately, it is completely unacceptable in some cases and quite inconvenient in others. Right now your proposal is effectively "design applications the way I do, or reimplement the language / library routines yourself".

NO CODE CAN BE RELIABLY EXECUTED PAST THIS POINT.
As I have already mentioned, it almost never can be truly reliable.

That's correct, but not a justification for making it less reliable.

It is a justification for making it more configurable.

If D changes assert() to do unwinding, then D will become unusable for building reliable systems until I add in yet another form of assert() that does not.

My personal perfect design would be like this:

- Exceptions work as they do now
- Errors work the same way as exceptions but don't get caught by catch(Exception)
- assert does not throw an Error but simply aborts the program (configurable with a druntime callback)
- define "die", which is effectively "assert(false)" (sketched below)
- tests don't use assert

That would provide default behaviour similar to the one we currently have (with all its good points) but leave many more configuration choices to the system designer.
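
For concreteness, a sketch of the "die" helper from that list - my reading of it, not an existing API; in the full design it would presumably consult the configurable druntime callback first:

import core.stdc.stdio : fprintf, stderr;
import core.stdc.stdlib : abort;

/// Unconditional fatal termination - the proposed stand-in for assert(false).
void die(string msg = "fatal error",
         string file = __FILE__, size_t line = __LINE__) nothrow @nogc
{
    fprintf(stderr, "%.*s(%zu): %.*s\n",
            cast(int) file.length, file.ptr, line,
            cast(int) msg.length, msg.ptr);
    abort();
}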

Some small chance of undefined behaviour vs 100% chance of resource leaks?

If the operating system can't handle resource recovery for a process terminating, it is an unusable operating system.

There are many unusable operating systems out there then :) And don't forget about remote network resources - while a leak there will eventually time out, it will still have a negative impact.
