Hello,
In the C language, the result of a buffer overflow is undefined behavior. On occasion, the program might crash. Other times, it might simply not produce the results you expected. And a lot of the time, in simple programs, nothing (noticeable, at least) will happen...
One of the reasons for the undefined behavior might be the memory "map" of a typical program. In a compiled C program, there are generally four distinct regions of memory that get utilized. The first region holds the executable code (the code segment). The second is where global variables are stored. The other two are the stack and the heap...
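As a rough illustration (a minimal sketch; the exact layout is platform- and compiler-specific, so the addresses are just whatever your system happens to use), you can print an address from each region:

#include <stdio.h>
#include <stdlib.h>

int global_var;            /* data/BSS segment */

void code_fn(void) { }     /* lives in the code segment */

int main(void) {
    int local_var;                /* stack */
    char *heap_ptr = malloc(16);  /* heap */

    /* casting a function pointer to void * is not strictly
       portable, but works on common platforms */
    printf("code:   %p\n", (void *)code_fn);
    printf("global: %p\n", (void *)&global_var);
    printf("stack:  %p\n", (void *)&local_var);
    printf("heap:   %p\n", (void *)heap_ptr);

    free(heap_ptr);
    return 0;
}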
Per the latest example below, there are three global variables... The memory for the three is likely allocated contiguously, so when you overflow one buffer, it just spills over into the memory allocated for the other two variables...
Actually, that may explain the output being seen. If you were to move the printf("string::%s::\n", s); line underneath the first strncpy, you'd likely see the first 12 or so characters of Some_String[] inside of s[], which makes more sense. But in its current incarnation, the second strncpy looks like it's filling up string[] and then naturally overflowing into s[]. That's why s[] no longer contains the first several characters of Some_String... In this case, since the memory is "there" and not being critically used for anything, no segfault occurs.
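One quick way to check that (a sketch reusing the three globals from the example below; whether they actually land adjacent depends on your compiler and linker) is to print their addresses:

#include <stdio.h>

char Some_String[] = "Happy is not that happy unless your Happy!";
char string[12];
char s[12];

int main(void) {
    /* if string[] ends up immediately before s[], overflowing
       string[] spills straight into s[] */
    printf("Some_String at %p\n", (void *)Some_String);
    printf("string      at %p\n", (void *)string);
    printf("s           at %p\n", (void *)s);
    return 0;
}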
I hope that makes sense....
Regards,
Dennis K. Paulsen
"Ted Kat." <[EMAIL PROTECTED]> wrote:
--- Brian Rise <[EMAIL PROTECTED]>wrote:
> I don't want to sound like a retard but I probably will anyway. But
> in this example isn't it syslog that has the vulnerability to the
> buffer overflow? And if so wouldn't they have probably fixed that
> code since '95? Could explain why it won't segfault on the newer
> distros.
You're right! Well, technically it would be in the syslog function where
the segfault would happen, but unless syslog() is printing the array
based on sizeof(array) instead of stopping at the terminating NUL at the
end of the string, it should still segfault; or at least that's what I
was told when I was just a little hacker. For fun I tried this little
pgm:
#include <stdio.h>
#include <string.h>

char Some_String[] = "Happy is not that happy unless your Happy!";
char string[12];
char s[12];

int main(void) {
    /* copies exactly 12 bytes: s[] gets no terminating NUL */
    strncpy( s, Some_String, sizeof(s) );
    /* copies all 43 bytes of Some_String into a 12-byte buffer */
    strncpy( string, Some_String, sizeof(Some_String) );
    printf("string::%s::\n", string);
    printf("string::%s::\n", s);
    return(0);
}
If compiled and run, you'd think this "should" tank with a segfault, but
no. It actually outputs:
string::Happy is not that happy unless your Happy!::
string:: that happy unless your Happy!::
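Worth noting in passing: strncpy() does not NUL-terminate the
destination when the source is at least as long as the count, so the
first copy above leaves s[] without a terminator of its own, and
printing it already reads past its end. A safer variant (a minimal
sketch) copies one byte less and terminates explicitly:

#include <stdio.h>
#include <string.h>

char Some_String[] = "Happy is not that happy unless your Happy!";
char s[12];

int main(void) {
    /* copy at most sizeof(s)-1 bytes, then terminate by hand */
    strncpy( s, Some_String, sizeof(s) - 1 );
    s[sizeof(s) - 1] = '\0';
    printf("s::%s::\n", s);   /* prints s::Happy is no:: */
    return 0;
}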
What I'm beginning to understand is that when your program starts up,
more memory than what you specify in C is allocated for use. And only
when you overstep this memory into the area where system calls occur
will a segfault happen. Like this:
/* segfaults for me */
#include <stdio.h>
#include <string.h>

int cnt;
char string[12];

int main(void) {
    /* this number may be different on other machines */
    for( cnt = 0; cnt < 2608; cnt++ ) { string[cnt] = 'A'; }
    printf("string::%s::\n", string);
    return(0);
}
I think it segfaulted back in '95 because Linux, or whatever *nix it
was tested on, didn't do this extra memory allocation.
Can anyone confirm this train of thought?
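One way to poke at the page-granularity idea (a sketch, assuming a
POSIX system; the writable mapping can extend beyond the one page
string[] sits on, so the real crash point, like the 2608 above, may be
farther out) is to compare string[]'s position against the page size:

#include <stdio.h>
#include <stdint.h>
#include <unistd.h>

char string[12];

int main(void) {
    /* the kernel maps memory in whole pages, so writes past the end
       of string[] only fault once they leave the last mapped page */
    long page = sysconf(_SC_PAGESIZE);
    long off  = (long)((uintptr_t)string % (uintptr_t)page);
    printf("page size: %ld\n", page);
    printf("bytes from string[] to the end of its page: %ld\n",
           page - off);
    return 0;
}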
=====
Ted Katseres
