Christopher Smith wrote:
Cool. So, like, how does C++ recover from a random pointer being invoked
as an object with a message call, after you've already freed that
memory?  Assuming it doesn't dump core, that is?

a) by allowing you to avoid using random pointers

Of course any language *allows* you to write correct code. How does C++ *recover* from using a random invalid pointer?

b) by allowing you to override new and delete operators to ensure you
don't do a double free

That's quite a trick, I'd think. I'm not sure how, if I pass you (say) an uninitialized pointer, the C++ dispatch machinery would know which delete operator to invoke. Maybe you can clarify that for me.
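For what it's worth, here's a minimal sketch of what I take the new/delete override to mean: globally replacing operator new and operator delete so the allocator remembers every pointer it hands out and refuses a second delete of the same one. The guard flag, the counter, and the policy of counting rather than aborting are my inventions for illustration; note it only catches the double free at delete time, and says nothing about an uninitialized pointer being *dereferenced*.

```cpp
#include <cstdlib>
#include <new>
#include <set>

// A sketch, not thread-safe: remember every pointer operator new hands out,
// and have operator delete refuse pointers it doesn't recognize.
static bool in_tracker = false;     // guards against the tracking set's own allocations
static int double_free_count = 0;

static std::set<void*>& live() {
    static std::set<void*>* s = nullptr;
    if (!s) {
        // Built with malloc + placement new and deliberately never destroyed,
        // so shutdown-order problems and recursion into operator new are avoided.
        void* mem = std::malloc(sizeof(std::set<void*>));
        s = new (mem) std::set<void*>();
    }
    return *s;
}

void* operator new(std::size_t n) {
    void* p = std::malloc(n ? n : 1);
    if (!p) throw std::bad_alloc();
    if (!in_tracker) {
        in_tracker = true;
        live().insert(p);           // remember this allocation
        in_tracker = false;
    }
    return p;
}

void operator delete(void* p) noexcept {
    if (!p) return;
    if (in_tracker) { std::free(p); return; }   // an internal node of the set itself
    in_tracker = true;
    bool known = live().erase(p) != 0;
    in_tracker = false;
    if (!known) { ++double_free_count; return; } // caught: complain, don't free again
    std::free(p);
}

bool demo_double_free() {
    int* p = new int(7);
    int* q = p;                 // duplicated pointer
    delete p;                   // fine: tracked, removed from the live set
    delete q;                   // double free: detected instead of corrupting the heap
    return double_free_count == 1;
}
```

Even granting all that, the tracker only sees pointers at delete time; a garbage pointer that's merely *used* sails right past it, which is the point at issue.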

I'm also not sure, if you allocate a piece of memory, duplicate the pointer to it, free the memory through one of those pointers, allocate another object that happens to land in the same place, and then reference the pointer you didn't free, how you would catch that with any sort of efficiency.

Now, granted, I once improved a Pascal compiler by implementing a testing mode that made sure every pointer reference was to a valid block of memory, but it involved a counter in every pointer, a counter in every allocated object, and running the list of all allocated objects on every pointer reference. For teaching Pascal, it was handy. For doing any sort of serious programming, it was rather inefficient. I'm not sure it would get along with C's version of pointers, with the pointer arithmetic and all, either.
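The counter-per-pointer, counter-per-object idea can be sketched in C++ without the walk-the-whole-list cost, using generation counters in a handle table. This is my reconstruction of a scheme in that spirit, not the Pascal implementation described above, and the names are made up; a real version would also have to manage the payload's storage.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Each allocation gets a table slot plus a generation counter. A "pointer"
// (handle) carries (slot, generation); every dereference checks that the
// counters still match, so a stale handle is caught even after its slot has
// been reused for a new object.
struct Slot { int value = 0; std::uint32_t gen = 0; bool live = false; };

static std::vector<Slot> table;

struct Handle { std::size_t slot; std::uint32_t gen; };

Handle alloc_int(int v) {
    for (std::size_t i = 0; i < table.size(); ++i)
        if (!table[i].live) {                    // reuse a dead slot
            table[i].value = v; table[i].live = true;
            return {i, table[i].gen};
        }
    table.push_back({v, 0, true});
    return {table.size() - 1, 0};
}

void free_int(Handle h) {
    Slot& s = table[h.slot];
    if (s.live && s.gen == h.gen) { s.live = false; ++s.gen; }  // bump on free
}

bool valid(Handle h) {
    return h.slot < table.size() && table[h.slot].live && table[h.slot].gen == h.gen;
}

int read_int(Handle h) {
    assert(valid(h));       // a dangling handle trips here instead of reading garbage
    return table[h.slot].value;
}

bool demo_handles() {
    Handle a = alloc_int(5);
    Handle dup = a;             // duplicated "pointer"
    free_int(a);
    Handle b = alloc_int(9);    // lands in the same slot
    return b.slot == a.slot && !valid(dup) && read_int(b) == 9;
}
```

The check is now constant time per dereference, but it still costs a lookup on every access, and it still wouldn't survive raw C-style pointer arithmetic.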

c) by allowing you to install your own heap

I'm not sure how that helps with trying to invoke methods on an uninitialized pointer with some left-over random value in it. I imagine you could build a class that represents a pointer that's never uninitialized, but that doesn't keep some other piece of C++ from clobbering it by mistake. It also isn't obvious to me how you'd prevent the problem of calling delete on an array, delete[] on a non-array, or delete[] on something other than the zeroth element of the array, but I'm not familiar enough with the innermost details of that sort of thing to say it's categorically impossible. I'll just note that I've never heard of such a library, and I've asked around, so I'm guessing it's infeasible for some reason.
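The "pointer that's never uninitialized" class is easy enough to sketch (the name NotNull is mine): delete the default constructor so the wrapper must be bound to a live object at creation. As noted, though, nothing stops other code from scribbling over its bytes, and it does nothing about the delete / delete[] mismatch.

```cpp
#include <cassert>

// A wrapper with no default constructor: it can never hold a random
// left-over value, because it can only be created from a real object.
template <class T>
class NotNull {
public:
    explicit NotNull(T& obj) : p_(&obj) {}  // must be given a live object
    NotNull() = delete;                     // NotNull<int> x;  won't compile
    T& operator*() const { return *p_; }
    T* operator->() const { return p_; }
private:
    T* p_;
};

bool demo_notnull() {
    int x = 3;
    NotNull<int> p(x);
    *p = 4;             // behaves like an ordinary pointer once bound
    return x == 4;
}
```

It rules out one class of mistake at compile time, which is about all a library-level fix can promise.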

d) by allowing you to use a platform or compiler that traps when this
happens, say by installing a signal handler for SEGV.

Assuming it is actually an invalid operation that violates segmentation, yes. That's the *good* version. The bad version they call a "code injection security flaw". ;-)
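On POSIX systems the trap-and-recover version looks roughly like this: install a SIGSEGV handler and long-jump out of the fault. To keep the sketch deterministic I simulate the fault with raise() rather than dereferencing a wild pointer (which is undefined behavior, and exactly the case where the "bad version" can happen instead of a clean trap).

```cpp
#include <setjmp.h>
#include <signal.h>

// POSIX-only sketch: trap SIGSEGV and long-jump back to a recovery point.
static sigjmp_buf recover_point;

static void on_segv(int) {
    siglongjmp(recover_point, 1);       // unwind out of the fault
}

bool demo_trap() {
    struct sigaction sa = {};
    sa.sa_handler = on_segv;
    sigemptyset(&sa.sa_mask);
    if (sigaction(SIGSEGV, &sa, nullptr) != 0) return false;

    if (sigsetjmp(recover_point, 1) == 0) {
        raise(SIGSEGV);                 // stand-in for the wild-pointer access
        return false;                   // never reached
    }
    return true;                        // trapped the "fault" and kept running
}
```

And of course this only works when the bad access actually violates segmentation; a wild pointer that lands inside your own writable pages corrupts state silently, handler or no handler.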

I think if it were as easy as you're making it out to be, you'd see people hosting C++ applets on web pages and coding servers in C++ that never had buffer overflow problems. I suspect you'd also see such things in the STL that would actually work and solve problems, rather than papering over them; see my "graph" question.

Hmm.. maybe you mean something different by "machine languages" than I
do, but if a machine accepts any arbitrary sequences of bits, there
really isn't any safety coming from it.

No, I'm using "safe" in a different manner than you are. I'll grant that in a case where literally anything the machine can do might happen, it's not a very useful version of "safe". Which was part of my point. At the bottom of the stack, a machine does what it does, regardless of how you think of it abstractly.

I believe your point is that if everyone would just always do bounds
checks and core dump when they exceeded their buffer sizes, we wouldn't
have viruses floating around out there.

You don't have to dump core when a bounds check fails. See Java applets, JavaScript, etc. I don't really want my multi-tab web browser with integrated email client dumping core because I went to a web page where some bozo passed me invalid JavaScript.
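Even in C++ terms the point holds: a failed bounds check can surface as a catchable error rather than a dead process. std::vector::at() throws std::out_of_range, which a caller can turn into an ordinary error path, the way a browser survives a bad script. The fetch function here is just an illustration.

```cpp
#include <stdexcept>
#include <string>
#include <vector>

// Checked access: at() throws instead of scribbling past the end,
// and the caller degrades gracefully instead of dumping core.
std::string fetch(const std::vector<int>& v, std::size_t i) {
    try {
        return std::to_string(v.at(i));     // bounds-checked
    } catch (const std::out_of_range&) {
        return "index out of range";        // keep running
    }
}

bool demo_bounds() {
    std::vector<int> v = {10, 20, 30};
    return fetch(v, 1) == "20" && fetch(v, 99) == "index out of range";
}
```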

No, it shows a small subset of what you need to know to debug it.

Usually not.

and you require that your unexpected error hasn't also screwed up your
database and/or mail access, right?
Nope. But if it hasn't (and it almost never does), then I get better
information.

So, what happens when the database and/or mail don't work? Do you
recover and go about your business or do you dump core?

I log it to the file system. Then I recover. I've never had the system so hosed that I couldn't dump it onto the file system. A different process comes along and picks it up later, just like your coredump thing.
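That log-and-pick-up-later pattern can be sketched as a spool file: if the primary sink (database, mail) is down, append the record locally; a separate pass replays the spool. The file name, the stubbed "sink is up" flag, and the function names are all stand-ins, not anything from the original setup.

```cpp
#include <fstream>
#include <string>
#include <vector>

const char* SPOOL = "spool.log";            // hypothetical spool file

// Try the primary sink; on failure, fall back to the local file system.
bool send_record(const std::string& rec, bool sink_up) {
    if (sink_up) return true;               // delivered directly (stubbed out here)
    std::ofstream spool(SPOOL, std::ios::app);
    spool << rec << '\n';                   // spool it for later
    return bool(spool);
}

// The later pickup pass: a different process reads the spool back.
std::vector<std::string> replay_spool() {
    std::vector<std::string> recs;
    std::ifstream spool(SPOOL);
    for (std::string line; std::getline(spool, line); )
        recs.push_back(line);
    return recs;
}

bool demo_spool() {
    send_record("event 1", false);          // sink down: record is spooled
    std::vector<std::string> pending = replay_spool();
    return !pending.empty() && pending.back() == "event 1";
}
```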

So far, it's about 30/30/30 this time around as to whether I've misconfigured something (like the address of the web server or the mail server's upstream outgoing), the code that sends it out has some (increasingly subtle) flaw in it (which also gets logged, incidentally), or the receiving machine is in the process of getting upgraded or something.

I'll occasionally shut down the DB to install a new schema, forgetting that I have jobs running in the background, and wind up with a mess of mail messages when I turn things back on. Or I'll start a reinstall of the code on the master machine and have other machines that are logging messages to it start complaining they can't connect, or vice versa.

Those are expected errors, tho, so it doesn't really say much.

Safe languages *don't* change unexpected errors into expected errors.
As you said yourself, they just remove the possibility of undefined
code. Whether an error was expected or not is largely a function of a
programmer's noggin.

Right. But if one of your expected errors is "something in my code threw an exception it shouldn't have", then you've turned all unexpected errors into expected errors. But now we're just arguing over the words.
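Made concrete, that catch-all looks something like a top-level guard whose one expected error is "something below me threw an exception it shouldn't have"; every unexpected failure then surfaces through a single expected path. The run_guarded name and the stand-in work function are mine.

```cpp
#include <exception>
#include <stdexcept>
#include <string>

// Top-level guard: every unexpected exception from do_work becomes one
// expected, reportable outcome instead of an unhandled crash.
std::string run_guarded(void (*do_work)()) {
    try {
        do_work();
        return "ok";
    } catch (const std::exception& e) {
        return std::string("unexpected exception: ") + e.what();
    } catch (...) {
        return "unexpected exception: (unknown)";
    }
}

bool demo_guard() {
    std::string good = run_guarded([] {});
    std::string bad  = run_guarded([] { throw std::runtime_error("oops"); });
    return good == "ok" && bad == "unexpected exception: oops";
}
```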

--
  Darren New / San Diego, CA, USA (PST)
    His kernel fu is strong.
    He studied at the Shao Linux Temple.

--
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-lpsg
