On Thu, 2002-10-31 at 16:30, Chris Mason wrote:
> On Thu, 2002-10-31 at 16:08, JP Howard wrote:
> > On 31 Oct 2002 15:38:19 -0500, "Chris Mason" <[EMAIL PROTECTED]> said:
> > <...>
> > > The idea is that during unbounded operations (hole creation and
> > > truncates), the journal code wasn't properly reserving log blocks.
> > <...>
> > 
> > Chris, what can trigger this situation? We're currently running
> > data=journal on 2.4.20pre in production--are we at risk?
> > 
> 
> This bug is pretty hard to hit.  It has been in every single version of
> journaling reiserfs, including 2.2.x.  So far, we've gotten two reports
> of it in about 3 years (oddly, both were this month).
> 
> What can trigger it?  I honestly haven't been able to force the problem
> to happen; it seems to require many processes doing deletions (or hole
> creations) at once, along with a very high system load in general.
> 
> The logging code pads all the reservations for space in the log, making
> it very hard to hit the hard limit of 1024 blocks per transaction.
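
(A rough sketch of that reservation scheme, for illustration only -- the
names and numbers below are invented, not the real reiserfs interfaces.
The idea is that padding makes writers stall well before the hard cap,
but an operation whose cost can't be estimated up front can still dirty
more blocks than it reserved and blow past the limit:)

    #include <stdio.h>

    #define TRANS_MAX   1024  /* hard cap on log blocks per transaction */
    #define RESERVE_PAD 64    /* slack padded onto every reservation */

    static int reserved;      /* log blocks promised to active writers */
    static int logged;        /* log blocks actually dirtied so far */

    /* Reserve log space, padding the caller's estimate. */
    static int journal_reserve(int estimate)
    {
            if (reserved + estimate + RESERVE_PAD > TRANS_MAX)
                    return -1;  /* transaction full; wait for a commit */
            reserved += estimate + RESERVE_PAD;
            return 0;
    }

    /* Log one metadata block.  The bug class: if this runs more times
     * than the caller reserved for, we sail past the hard cap. */
    static void journal_mark_dirty(void)
    {
            if (++logged == TRANS_MAX + 1)
                    printf("overflow: more than %d blocks in one "
                           "transaction\n", TRANS_MAX);
    }

    int main(void)
    {
            /* An unbounded delete: the estimate covers 4 blocks, but
             * the item turns out to span far more of the tree. */
            if (journal_reserve(4) == 0)
                    for (int i = 0; i < 2000; i++)
                            journal_mark_dirty();
            return 0;
    }
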
> 
> Both sites that have hit the bug have a very large number of files
> (millions), meaning that metadata operations will tend to log more
> blocks, making the bug more likely.
> 
> If you have fewer than a million files, you'll probably never be able
> to hit it.  I'm still going to try to get the fix into 2.4.20 though.
> 
> -chris
> 
> 
Off-topic, and not meaning to scold, but _why_ are you running 2.4.20pre
in a _production_ environment anyway? Just curious.

-C

