On 2008-10-03, at 02:47, Joshua Juran wrote:
I used to have what-to-do-on-close logic in the kernel, but now the window will always close when the last file descriptor referring to it is closed. So run a trivial program that does "while ( true ) pause();" to keep it open.

I don't need to do that to keep a file in the file system from being deleted, why should I have to do that when I just want to redirect the output of "ls" to a new window?

Pipes and filters and redirection are human tools, at least as much as scripting tools. The UNIX shell is a very humanistic API: it's something created for people to use, directly, casually. If you have to do things like write a program to keep a file open, you might as well be using JCL.

Also, rather than opening the new-window device yourself (when in the shell), it's advised to use a dedicated program that handles setting up a new session. If the window isn't the controlling terminal of its foreground process group's session, then clicking the close box does nothing (i.e. the processes holding its file descriptor don't receive a SIGHUP).

The whole "session", "process group" nonsense is irrelevant for this.

This is a little window, it's not a terminal, it's not a session, it's "ls > /dev/win/anon/default".

If you close it before the program's finished, the program gets a SIGPIPE, not a SIGHUP.

Are people crazy for building little trains that can't carry passengers or cargo, can't go anywhere, and don't make any money?

No, but they're crazy if they think balsawood is "more solid" than steel.

Well, Apple was going to provide real OS features (protected memory, preemptive multitasking, etc.) in Copland, but that broke too much backward compatibility, so it was canceled and replaced by Mac OS X.

It broke too much backwards compatibility because the original design was so broken that it was inherently unfixable without breaking it, in which case you wouldn't have had the same OS any more. Which is what finally happened.

Microsoft will have to do the same thing, eventually, with Internet Explorer and the HTML control and "security zones".

Mac OS 8 ran faster under SheepShaver than native.

Your complaint says that one machine is faster than a different one,

No, this was on the same computer. Same disk, same memory, same CPU, the only difference was booting from the BeOS partition or the Mac OS partition. Because BeOS didn't have to run disk I/O and the application in lockstep.

I can't help but wonder if you just made this up. Benchmarks or it didn't happen.

Don't be daft. If I was going to lie about that pile of fungus food, I could make up benchmarks too.

Mac OS 9 was slower on a 233 MHz 604e than NeXTstep on a 40 MHz 68030.

Again, please provide benchmarks.

This is "hates-software", not Ars Technica. I hate classic Mac OS with a burning hate, for all the time I wasted trying to make it work even vaguely reliably. For all the mollycoddling I had to do to keep it from ripping its belly open and wandering around bleating about the way its hooves were digging into its trailing intestines.

I don't even have the NeXT any more, alas.

I can provide anecdotes all day, to explain why I hate it.

With just the Finder and ANY other program running on my 7600, switching from a Finder window to another window took longer to complete than the same switch did with the NeXT file manager and any program running under NeXTStep.

I upgraded my 7600 from 9.2.2 to 10.1.5 and all of a sudden I could play music while using the file system without getting dropouts. Yes. Really. Under 9, just using the file system from another program, any other program, would bring the Great Multitasking Charade to a screeching halt and the music ended.

Moving windows was annoying at first, with no hardware acceleration. Once I found out how to turn off the window shadows, it was fast even at plain old window manipulations.

I got a G3 processor and upgraded to 10.2, and it was even better. I dropped back to 9.2.2 on the same machine, and it STILL dropped music in iTunes when I tried to do file transfers in the background.

I *couldn't* make this stuff up.

But there's more!

On my Amiga, in 1989, I frequently ran a video game in the background, for background music, while I was working on the source code for that game, compiling, editing, and chatting on a dialup session with the other developer, without any worries, without even the slightest glitch in the music, without any delays I could tell in the compile process. I did benchmark that one time... running a pretty intensive MIDI app added 3% to the total time from "make" to another prompt... and the compiler didn't cause any delays in the MIDI pipeline. That's on a 7.14 MHz 68000 with 2.5 MB of RAM.

Yet a 400 MHz G3 with 768 MB of RAM got dropouts in iTunes just copying files?

And you think it's *solid*?

AmigaOS wasn't what I'd call *solid*, and it was the Rock of Gibraltar by comparison.

Please imagine an operating system that looks and feels like classic Mac OS, but without the faults.

I can't. I literally can't. I wasted too much time trying in vain to hide the faults from myself, to work around them, to avoid triggering them without *completely* giving up every expectation I had for what a computer should be able to do. I still have occasional use for it, but I can't imagine using it for anything more than the driver for my lovely old HP scanner.

Start over from scratch. Throw away the horrible 1982-era OS-as-a-GUI-library legacy. Yes, I could see doing that. I could see making something that looked like Mac OS, but I sure wouldn't want to make it feel like Mac OS, because so much of the feel of Mac OS, to me, is bound up in "oh no, you can't do that, because I'm just a Mac".
