Thank you, everybody, for
the kind and thorough answers.

Date: Tue, 16 May 2017 18:15:56 -0400
From: Richard Hipp <d...@sqlite.org>
Subject: Re: [fossil-users] Bug Report or a Feature Request: Cope with
Hosting Providers' Watchdogs
>...
> What operation is it trying to do that takes more than 10 seconds?
> Usually Fossil runs for more like 10 milliseconds.
> ...
> Building big ZIP archives or tarballs takes long.  (FWIW, I have
> profiled those operations, and Fossil is spending most of its time
> inside of zlib, doing the required compression.)  Big "annotates"
> can also take some time if you have a long history.  Do you know what
> operations are timing out?
>...

I took the path of least resistance and switched from my
repository's public URL to SSH-based access.
I should have thought of that earlier.
My hosting provider

    (They do not have any English pages, because they
     target only the local, Estonian, market.)
    https://www.veebimajutus.ee/

confirmed that the maximum lifetime of the processes that
service their public web requests is 10 minutes, but
even without measuring I suspect that I would have
needed more than that. At my hosting provider,
processes started from the SSH console
have no time-to-live limit, so switching
from HTTPS to SSH solved my problem. However, I then
ran into a different difficulty while cloning the repository:

    ----console--session--excerpt--start---------
    Round-trips: 448   Artifacts sent: 0  received: 251146
    Clone done, sent: 141797  received: 5866110695  ip: 185.7.252.74
    Rebuilding repository meta-data...
      100.0% complete...
    Extra delta compression...
    Vacuuming the database...
    SQLITE_FULL: statement aborts at 9: [INSERT INTO vacuum_db.'blob'
SELECT*FROM"repository".'blob'] database or disk is full
    SQLITE_FULL: statement aborts at 1: [VACUUM] database or disk is full
    fossil: database or disk is full: {VACUUM}
    ----console--session--excerpt--end---------

I suspect that the failure mechanism here is that the
cloning uses

    /tmp

on the client side, and if the HDD partition that contains
/tmp is too full, the cloning fails. From

    fossil-2.2/src/sqlite3.c

line 35265 I found out that the temporary directory can be
overridden with the SQLITE_TMPDIR environment variable.
Setting it solved my problem: I eventually managed to clone
my repository without issues.
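For concreteness, the client-side workaround looks roughly like this
(a minimal sketch; the temporary directory, user, host and repository
paths below are placeholders, not my actual ones):

```shell
# Point SQLite's temporary files at a directory on a partition
# with enough free space (the path here is only an example).
mkdir -p "$HOME/sqlite-tmp"
export SQLITE_TMPDIR="$HOME/sqlite-tmp"

# Then clone over SSH so the server-side process is not killed
# by the hosting provider's watchdog, e.g. (placeholders):
#   fossil clone ssh://user@host//home/user/repos/project.fossil project.fossil
```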

I suggest that the Fossil code might be updated
to include a test that checks the size of the repository
on the server side and then checks the free space on the
partition containing the path in SQLITE_TMPDIR,
or, if SQLITE_TMPDIR is not set, the free space on the
partition containing /tmp.
If there is not enough free space, fossil should exit with an error
before downloading any artifacts, and the stderr
should include a message that hints that the problem
might be solved by setting SQLITE_TMPDIR to a folder
on a partition with at least
<required amount of free space + a little extra> MiB of free space.
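The proposed pre-flight check could be sketched in shell along these
lines (the function name, the 100 MiB safety margin and the use of
df are my own illustrative choices, not anything Fossil currently does):

```shell
# check_tmp_space REQUIRED_KIB
# Succeeds (status 0) if the partition holding $SQLITE_TMPDIR, or /tmp
# when SQLITE_TMPDIR is unset, has at least REQUIRED_KIB plus a margin
# of free space; otherwise prints a hint to stderr and fails.
check_tmp_space() {
    required_kib=$1
    margin_kib=102400   # "a little extra": 100 MiB
    dir="${SQLITE_TMPDIR:-/tmp}"

    # df -Pk: POSIX output format in 1 KiB blocks; the "Available"
    # figure is column 4 of the second line.
    free_kib=$(df -Pk "$dir" | awk 'NR==2 {print $4}')

    if [ "$free_kib" -lt $((required_kib + margin_kib)) ]; then
        echo "error: $dir has only ${free_kib} KiB free, need" \
             "$((required_kib + margin_kib)) KiB; set SQLITE_TMPDIR" \
             "to a folder on a partition with more free space" >&2
        return 1
    fi
}

# Usage: run before downloading any artifacts, with REQUIRED_KIB
# taken from the server-side repository size.
#   check_tmp_space "$repo_size_kib" || exit 1
```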

Thank you, everybody, for your answers and help, and
thank you for reading my letter.

Regards,
martin.v...@softf1.cm


_______________________________________________
fossil-users mailing list
fossil-users@lists.fossil-scm.org
http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users
