Erik Steffl <[EMAIL PROTECTED]> wrote in message news:<[EMAIL PROTECTED]>...
Ryo Furue wrote:
[. . .]
That's a good question. In fact, most modern Unix/Linux systems allow you to use as large a stack as you like, roughly speaking. Although the default stacksize limit is often set to a rather low value, say, 2MB, each user can raise the limit, unless the system administrator (root) prohibits it. So, no: the feature in question is very useful.
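(For concreteness, here is a minimal sketch of what "raising the limit" can look like from inside a program, using the POSIX getrlimit/setrlimit calls. This is Unix-specific, and the hard limit set by root still caps how far the soft limit can go.)

    #include <sys/resource.h>
    #include <cstdio>

    int main() {
        struct rlimit rl;
        if (getrlimit(RLIMIT_STACK, &rl) != 0) {    // query the current stack limits
            std::perror("getrlimit");
            return 1;
        }
        std::printf("soft stack limit: %lu, hard limit: %lu\n",
                    (unsigned long)rl.rlim_cur, (unsigned long)rl.rlim_max);

        rl.rlim_cur = rl.rlim_max;  // raise the soft limit up to the hard limit
        if (setrlimit(RLIMIT_STACK, &rl) != 0) {
            std::perror("setrlimit");
            return 1;
        }
        // note: how far an already-running stack can then actually grow
        // is system-dependent.
        return 0;
    }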
Some people believe that the stacksize limit should be kept low (in other words, that you shouldn't use much stack), but, as far as I know, nobody seems to be able to give a convincing argument for that position. I suspect that the position is just inertia, a remnant of an old habit. But I'm ready to be convinced otherwise if somebody has a good argument.
the stack is something you do not have control over and have no information about, so it's not good to use it for large data - you cannot figure out how big it is or how much of it you're using, and there's nothing you can do if the space cannot be allocated, etc. (maybe you can figure some of it out, but that would be compiler-specific, I think)
on the other hand when using malloc/free or new/delete you can do _something_ if there's not enough memory.
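(As a concrete illustration of that "something" - a rough sketch using the standard nothrow form of new, so the failure shows up as a null pointer rather than an exception; the function name and the recovery action are just placeholders:)

    #include <new>
    #include <cstddef>
    #include <cstdio>
    #include <cstdlib>

    double* get_workspace(std::size_t n) {
        double* p = new (std::nothrow) double[n];   // returns 0 on failure instead of throwing
        if (p == 0) {
            // the "something": report the failure, retry with a smaller size,
            // fall back to an out-of-core algorithm, or just exit cleanly.
            std::fprintf(stderr, "could not allocate %lu doubles\n",
                         (unsigned long)n);
            std::exit(EXIT_FAILURE);
        }
        return p;
    }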
Right. But that depends on your design goal. There are many applications where the heap is the right choice for large data. But there are others where the stack is better.
First of all, there are many cases where the heap is the only choice, as when your function needs to return a pointer to the space allocated within it. So the real choice arises in cases like this:
    int f() {
        // . . .
        double workspace0[N];                // stack
        double* workspace1 = new double[N];  // heap; which is better?
        // . . .
        delete[] workspace1;
    }
As you say, if there's _something_ to do when there is not enough memory, the heap is the right choice. That "something" may be, and in many cases is, just to print an error message and quit.
If, on the other hand, you don't care whether your program crashes in case of a memory shortage, the stack is the better choice. One obvious reason is that the stack is faster. Another is that you can forget about 'delete'ing the allocated space; deallocation is automatic for the stack. Many scientific applications are like this: "If it doesn't run, it doesn't. That's fine. Use a larger computer."
My point is that it really is a choice. The fact is that in the majority of cases the heap is the better choice. But I don't see any inherent reason why you shouldn't use the stack for large data. I said the stack-is-not-for-large-data rule is "inertia", but it may be better called a myth.
what? I just provided you with a bunch of reasons. And you provided another one (you can get around the problem you described by having the variable declared in the highest-level function where it's used; in your example above, the workspace would be allocated in whoever calls f and would be passed as an argument to f). In a particular situation these might not outweigh whatever advantage you get from using the stack, but that does not mean there are no reasons.
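(Sketched out, that alternative looks roughly like this - the caller owns the workspace and f just borrows it; the names here are placeholders, not from the original code:)

    #include <cstddef>

    void f(double* workspace, std::size_t n) {
        // ... use workspace[0..n-1] as scratch space ...
    }

    int main() {
        const std::size_t N = 1000000;
        double* workspace = new double[N];  // allocated once, at the highest level that needs it
        f(workspace, N);
        // ... further calls can reuse the same workspace ...
        delete[] workspace;
        return 0;
    }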
By the way, I don't know how hard it is to keep track of stack use. But the stack is released when the function finishes. Isn't it, then, harder to keep track of heap use? I've heard a lot about "memory leaks", which refer to forgetting to 'delete' allocated heap memory. I have yet to hear of a "stack leak" :) Of course, I'm joking. Please don't take this seriously.
of course there are reasons to use one or the other; I am not advocating heap over stack. You asked for some reasons why one should be careful about using the stack (you wrote: "...shouldn't use much stack ... nobody seems to be able to give a convincing argument ...") so I provided you with some. Obviously, a general rule like that does not apply in all situations...
erik