DO NOT REPLY TO THIS EMAIL, BUT PLEASE POST YOUR BUG
RELATED COMMENTS THROUGH THE WEB INTERFACE AVAILABLE AT
<http://issues.apache.org/bugzilla/show_bug.cgi?id=39807>.
ANY REPLY MADE TO THIS MESSAGE WILL NOT BE COLLECTED AND
INSERTED IN THE BUG DATABASE.

http://issues.apache.org/bugzilla/show_bug.cgi?id=39807

           Summary: large files / filesystem corruption can cause apache2 to
                    eat up all available memory
           Product: Apache httpd-2
           Version: 2.2.2
          Platform: PC
        OS/Version: Linux
            Status: NEW
          Severity: major
          Priority: P2
         Component: Core
        AssignedTo: [email protected]
        ReportedBy: [EMAIL PROTECTED]


I recently had some filesystem corruption that caused the size of a 782-byte .GIF
to be reported as 65,536 terabytes; the bogus value is exactly the real 782 bytes
plus 2^56, i.e. a single high bit set in the size field:

-rw-r--r-- 1 kirin kirin               656 Mar 21  2005 btn_profile_on.gif
-rw-r--r-- 1 kirin kirin               866 Mar 21  2005 btn_register.gif
-rw-r--r-- 1 kirin kirin               772 Mar 21  2005 btn_register_on.gif
-rw-r--r-- 1 kirin kirin 72057594037928718 Mar 21  2005 btn_search.gif
-rw-r--r-- 1 kirin kirin               709 Mar 21  2005 btn_search_on.gif
-rw-r--r-- 1 kirin kirin              1045 Mar 21  2005 btn_users.gif
-rw-r--r-- 1 kirin kirin               915 Mar 21  2005 btn_users_on.gif

# cp btn_search.gif btn_search2.gif
# ls -l btn_search2.gif

-rw-r--r-- 1 root root 782 Jun 13 15:19 btn_search2.gif


When Apache tries to serve the corrupt file, it enters a loop in default_handler
that uses up all available memory.
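
For what it's worth, here is a rough, standalone sketch of why a ~2^56-byte
st_size is fatal if the handler ends up emitting one bucket per fixed-size chunk
of the file. This is not the actual core.c code; the chunk size and per-bucket
overhead are assumptions (the 52 bytes is taken from the apr_bucket_alloc call in
the backtrace below):

/* Illustration only: estimates the bucket overhead for serving a file
 * whose st_size has been corrupted to ~2^56 bytes, assuming one bucket
 * header is allocated per fixed-size chunk. CHUNK and BUCKET_OVERHEAD
 * are guesses, not httpd's real values. */
#include <stdio.h>
#include <stdint.h>

#define CHUNK           (16 * 1024 * 1024ULL)  /* assumed per-bucket chunk size */
#define BUCKET_OVERHEAD 52ULL                   /* allocation size seen in the backtrace */

int main(void)
{
    uint64_t reported_size = 72057594037928718ULL;  /* st_size of the corrupt file */
    uint64_t buckets  = (reported_size + CHUNK - 1) / CHUNK;
    uint64_t overhead = buckets * BUCKET_OVERHEAD;

    printf("buckets needed       : %llu\n", (unsigned long long)buckets);
    printf("header overhead (GB) : %.1f\n", overhead / 1e9);
    return 0;
}

With these assumed numbers that works out to roughly 4.3 billion buckets and over
200 GB of bucket headers alone, before anything is sent to the client, which matches
the memory exhaustion I'm seeing.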

(gdb) bt
#0  apr_bucket_alloc (size=52, list=0x8210390)
    at buckets/apr_buckets_alloc.c:136
#1  0xa7f8c790 in apr_bucket_simple_copy (a=0x8210670, b=0xaf9e12f0)
    at buckets/apr_buckets_simple.c:22
#2  0xa7f8a36c in apr_bucket_shared_copy (a=0x8210670, b=0xaf9e12f0)
    at buckets/apr_buckets_refcount.c:38
#3  0x0806d811 in default_handler (r=0x8320320) at core.c:3678
#4  0x080740f7 in ap_run_handler (r=0x8320320) at config.c:157
#5  0x080771e1 in ap_invoke_handler (r=0x8320320) at config.c:371
#6  0x08081c48 in ap_process_request (r=0x8320320) at http_request.c:258
#7  0x0807eeee in ap_process_http_connection (c=0x820c550) at http_core.c:172
#8  0x0807ae77 in ap_run_process_connection (c=0x820c550) at connection.c:43
#9  0x08085c24 in child_main (child_num_arg=<value optimized out>)
    at prefork.c:640
#10 0x08085f1a in make_child (s=<value optimized out>, slot=0) at prefork.c:736
#11 0x08085fda in startup_children (number_to_start=5) at prefork.c:754
#12 0x08086a44 in ap_mpm_run (_pconf=0x80a70a8, plog=0x80d5160, s=0x80a8f48)
    at prefork.c:975
#13 0x08061dcf in main (argc=134893856, argv=0x8153390) at main.c:717

I'm not sure whether httpd would go into the same tailspin if the file really *were*
that huge, but it would be nice if it were more resilient to large files and
filesystem corruption.
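
In case it helps anyone hitting the same thing, a quick way to spot this kind of
inode damage from outside httpd is to compare st_size against the blocks actually
allocated to the file. This is just a standalone check, not a proposed httpd change,
and it will also flag legitimately sparse files:

/* Flags files whose claimed size wildly exceeds the space actually
 * allocated on disk (a symptom of the corruption above). Note that
 * intentionally sparse files will also trip this check. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    for (int i = 1; i < argc; i++) {
        struct stat st;
        if (stat(argv[i], &st) != 0) {
            perror(argv[i]);
            continue;
        }
        long long claimed   = (long long)st.st_size;
        long long allocated = (long long)st.st_blocks * 512;  /* st_blocks is in 512-byte units */

        if (claimed > allocated + (1 << 20))  /* allow ~1 MB of slack */
            printf("%s: st_size=%lld but only %lld bytes allocated -- suspicious\n",
                   argv[i], claimed, allocated);
    }
    return 0;
}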

-- 
Configure bugmail: http://issues.apache.org/bugzilla/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are the assignee for the bug, or are watching the assignee.

---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
