On Mon, Dec 22, 2014 at 1:01 PM, Ivan Zhakov <i...@visualsvn.com> wrote:

> On 17 July 2014 at 18:00,  <stef...@apache.org> wrote:
> > Author: stefan2
> > Date: Thu Jul 17 15:00:33 2014
> > New Revision: 1611380
> >
> > URL: http://svn.apache.org/r1611380
> > Log:
> > Workaround for the poor performance of APR reader / writer locks on Windows.
> > Basically, fall back to 1.7 code on Win32 by using a simple mutex.
> >
> > This resulted in a 35% increase in throughput for a 'null-export' over
> > ra_serf from hot SVN caches with revprop caching enabled.  OTOH, this may
> > cost us some scalability with ra_svn on high-speed (>1Gb) networks.
> >
> > * subversion/libsvn_subr/cache-membuffer.c
> >   (USE_SIMPLE_MUTEX): New flag.  Set depending on whether we need to fall
> >                       back to a simple locking scheme.
> >   (svn_membuffer_t): Select lock type at compile time.
> >   (read_lock_cache,
> >    write_lock_cache,
> >    force_write_lock_cache,
> >    unlock_cache): Use the simple mutex lock when selected.
> >   (svn_cache__membuffer_cache_create): Initialize the correct lock type and
> >                                        disable the counter_mutex if redundant.
> >
> > Modified:
> >     subversion/trunk/subversion/libsvn_subr/cache-membuffer.c
> >
> [...]
> > @@ -674,6 +712,12 @@ static svn_error_t *
> >  unlock_cache(svn_membuffer_t *cache, svn_error_t *err)
> >  {
> >  #if APR_HAS_THREADS
> > +#  if USE_SIMPLE_MUTEX
> > +
> > +  return svn_mutex__unlock(cache->lock, SVN_NO_ERROR);
> > +
> > +#  else
> > +
> >    if (cache->lock)
> >    {
> >      apr_status_t status = apr_thread_rwlock_unlock(cache->lock);
> > @@ -683,6 +727,8 @@ unlock_cache(svn_membuffer_t *cache, svn
> >      if (status)
> >        return svn_error_wrap_apr(status, _("Can't unlock cache mutex"));
> >    }
> > +
> > +#  endif
> >  #endif
> >    return err;
> >  }
> The provided ERR argument is ignored in the USE_SIMPLE_MUTEX codepath, so
> any error from the cache serializer will be dropped and an invalid result
> will be used.
>

Thanks for fixing!
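
For the record, a minimal sketch of what the corrected codepath could look
like, assuming the fix is simply to pass the caller's ERR through to
svn_mutex__unlock() instead of SVN_NO_ERROR, leaving the rest of the
function's error handling unchanged:

  #  if USE_SIMPLE_MUTEX

    /* Hand ERR to the unlock call so a serializer error reaches the caller;
       svn_mutex__unlock() returns ERR if set, otherwise the unlock result. */
    return svn_mutex__unlock(cache->lock, err);

  #  else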

-- Stefan^2.
