GitHub user eiri opened a pull request:

    https://github.com/apache/couchdb-couch/pull/173

    Prevent reading beyond end of file and add config parameter to limit maximum 
pread size

    When a database file becomes corrupted, it is possible for pread to receive 
a bogus length and load a large part of the file into memory before the 
couch_file process crashes. Depending on the size of the file, this could bring 
the whole node down.
    
    This change prevents pread from reading beyond the end of a file and adds a 
configuration parameter `max_pread_size` to handle the case where a corrupted 
read does not go beyond EOF but is still large enough to crash the node on 
startup.
    
    Two stats counters have been added to report both exceptions.
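
    The guard described above can be sketched as follows. This is an 
illustrative sketch in Python rather than CouchDB's actual Erlang couch_file 
code; the function name, exception names, and the default limit are assumptions, 
not the implementation.

    ```python
    import os

    # Illustrative default only; CouchDB's actual max_pread_size default may differ.
    MAX_PREAD_SIZE = 64 * 1024 * 1024  # 64 MiB

    class ExceedEOF(Exception):
        """Requested read extends past the end of the file."""

    class ExceedPreadLimit(Exception):
        """Requested read is larger than the configured max_pread_size."""

    def safe_pread(fd, pos, length, max_pread_size=MAX_PREAD_SIZE):
        """Read `length` bytes at offset `pos`, refusing reads that pass EOF
        or exceed the configured limit (so a bogus length from a corrupted
        file cannot pull a huge buffer into memory)."""
        eof = os.fstat(fd).st_size
        if pos + length > eof:
            raise ExceedEOF(f"read of {length} bytes at {pos} passes EOF at {eof}")
        if length > max_pread_size:
            raise ExceedPreadLimit(f"read of {length} bytes exceeds {max_pread_size}")
        return os.pread(fd, length, pos)
    ```

    In the real change each rejection would also bump its stats counter 
(exceed_eof or exceed_limit) before the exception propagates.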

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/cloudant/couchdb-couch 
65287-add-max-pread-size-limit

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/couchdb-couch/pull/173.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #173
    
----
commit 89990e1800934823b83341152d2a103cd4bcad8b
Author: Eric Avdey <[email protected]>
Date:   2016-05-16T13:15:38Z

    Raise exception on attempt of reading beyond end of file

commit 8ea500ef413d09f862609d34bdd8ac6737cd26a3
Author: Eric Avdey <[email protected]>
Date:   2016-05-16T16:55:52Z

    Implement config parameter max_pread_size

commit 824af52d2aa543204aeb0bd5a1b90264f02e55a9
Author: Eric Avdey <[email protected]>
Date:   2016-05-16T19:47:36Z

    Add stats counters for exceed_eof and exceed_limit

----


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---
