On Thu, Jun 19, 2014 at 12:21 PM, Ivan Zhakov <i...@visualsvn.com> wrote:

> Hi,
>
> I've performed several FSFS performance tests using the latest Subversion
> from trunk@r1602928.
>
> Please find the results below. I'm also attaching the results as a table in
> PDF for easier reading.
>
> Environment:
> Windows Server 2012 R2 (x64)
> 2 GB RAM, with 2 virtual processors hosted on Macbook with SSD disk
> Subversion 1.9.0-dev from trunk@r1602928
> Test data: Ruby project repository (46054 revisions, 400-500 MB size on
> disk)
>
> All tests are performed using default options.
>
> Tests performed:
> 1. svnadmin load -- loading first 5000 revisions into repository
> 2. svnadmin dump -- dumping all repository revisions to NUL
> 3. svn log http:// -- svn log over http:// protocol to NUL
> 4. svn log file:// -- svn log over file:// protocol to NUL
> 5. svn export http:// -- svn export over http:// protocol to local disk
> 6. svn export file:// -- svn export over file:// protocol to local disk
>
> Subversion 1.9.0-dev, fsfs6 unpacked repository
> ===============================================
>
..

> svn export http://   7.336      7.757    7.437
> svn export file://   4.151      4.854    4.310
>

Did you actually run the export on the repository root?
If so, those numbers are plainly impossible. Here is why:
         54,641 directories
        512,580 files
  4,558,565,078 bytes in files
      3,077,645 properties
     45,809,954 bytes in properties

There is simply no way that httpd would deliver that amount
of data in about 4 seconds! Doing that would require hot caches
and 20-30 GHz of CPU power spread across more than 10 concurrent
connections.
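
To put rough numbers on that, here is a minimal back-of-envelope sketch
in Python (purely illustrative: it uses the last column of the export
timings above, whose exact meaning is not shown in this excerpt, plus the
byte counts from the stats, and assumes the export really covered the
repository root):

    # Implied sustained throughput of the reported export runs.
    file_bytes = 4_558_565_078        # bytes in files (stats above)
    prop_bytes = 45_809_954           # bytes in properties (stats above)
    total_bytes = file_bytes + prop_bytes

    for label, seconds in (("svn export http://", 7.437),
                           ("svn export file://", 4.310)):
        mib_per_s = total_bytes / seconds / (1024 * 1024)
        print(f"{label}: ~{mib_per_s:,.0f} MiB/s sustained")

    # -> roughly 590 MiB/s over http:// and roughly 1,020 MiB/s over file://,
    #    which is hard to credit on a 2-vCPU VM unless everything is cached.

That works out to well over half a gigabyte per second through httpd,
which is why the numbers look implausible to me.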

Also, you are basically saying that reading many small files
is more efficient than reading the same amount of user data
combined into a few larger ones. The on-disk data size alone
should make packed repos faster.

-- Stefan^2.
