Re: [fossil-users] Bug Report or a Feature Request: Cope with Hosting Providers' Watchdogs

2017-05-21 Thread Martin Vahi

Thank You, everybody, for the kind and thorough answers.

Date: Tue, 16 May 2017 18:15:56 -0400
From: Richard Hipp <d...@sqlite.org>
Subject: Re: [fossil-users] Bug Report or a Feature Request: Cope with
Hosting Providers' Watchdogs
>...
> What operation is it trying to do that takes more than 10 seconds?
> Usually Fossil runs for more like 10 milliseconds.
> ...
> Building big ZIP archives or tarballs takes a long time.  (FWIW, I have
> profiled those operations, and Fossil is spending most of its time
> inside of zlib, doing the requisite compression.)  Big "annotates"
> can also take some time if you have a long history.  Do you know what
> operations are timing out?
>...

I took the path of least resistance and switched from the
public URL of my repository to SSH-protocol based access.
I should have thought of that earlier.
My hosting provider

(They do not have any English pages, because they
 target only the local Estonian market.)
https://www.veebimajutus.ee/

confirmed that the maximum lifetime of the processes that
serve their public web requests is 10 minutes, but
without doing any measurements I suspect that I would have
needed even more than that. In the case of my hosting provider
the processes that are started from the SSH console
do not have a time-to-live limit, so switching
from HTTPS to SSH solved my problem, except that I then
ran into a different difficulty while cloning the repository:

console--session--excerpt--start-
Round-trips: 448   Artifacts sent: 0  received: 251146
Clone done, sent: 141797  received: 5866110695  ip: 185.7.252.74
Rebuilding repository meta-data...
  100.0% complete...
Extra delta compression...
Vacuuming the database...
SQLITE_FULL: statement aborts at 9: [INSERT INTO vacuum_db.'blob'
SELECT*FROM"repository".'blob'] database or disk is full
SQLITE_FULL: statement aborts at 1: [VACUUM] database or disk is full
fossil: database or disk is full: {VACUUM}
console--session--excerpt--end-

I suspect that the failure mechanism here is that the
cloning somehow uses the

/tmp

at the client side, and if the HDD partition that contains
/tmp is "too full", the cloning fails. I found out from

fossil-2.2/src/sqlite3.c

line 35265 that the SQLITE_TMPDIR environment variable can be
used to choose a different temporary folder. I pointed it at a
folder on a partition with more free space, and that solved my
problem: I eventually managed to clone my repository without
further trouble.
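
For anyone who runs into the same SQLITE_FULL error, here is roughly
what the workaround looks like on the client side. The folder path and
the repository URL below are only placeholders for illustration, not my
real ones:

console--session--excerpt--start-
$ mkdir -p /home/user/fossil_tmpdir
$ df -h /home/user/fossil_tmpdir   # check that this partition has enough free space
$ SQLITE_TMPDIR=/home/user/fossil_tmpdir \
  fossil clone ssh://user@example.com//home/user/repos/project.fossil \
  project_clone.fossil
console--session--excerpt--end-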

I suggest that the Fossil code might be updated
to include a check that determines the size of the repository
at the server side and then looks at the free space on the
partition that contains the path given in SQLITE_TMPDIR,
or, if SQLITE_TMPDIR is not set, the free space on the
partition that contains /tmp.
If there is not enough free space, fossil should exit with an error.
The exit should happen before downloading
any artifacts, and the message on stderr should
hint that the problem might be solved by
giving SQLITE_TMPDIR a value that refers to a folder
that resides on a partition with at least
 MiB of free space.
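
To make the idea more concrete, here is a rough, standalone C sketch
of such a pre-flight check. It is only an illustration under my own
assumptions: the function name, the message text and the idea that the
repository size is already known are all made up, and it is not actual
Fossil code.

/* A rough, standalone sketch of the pre-flight check proposed above.
** NOT actual Fossil code: the function name and the way the repository
** size is obtained are invented for illustration only. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/statvfs.h>

/* Return 0 if the temporary-file partition has room for roughly
** repo_size_bytes of data, otherwise print a hint and return -1. */
static int check_tmp_space(unsigned long long repo_size_bytes){
  const char *zTmp = getenv("SQLITE_TMPDIR");
  struct statvfs fs;
  unsigned long long free_bytes;
  if( zTmp==0 || zTmp[0]==0 ) zTmp = "/tmp";
  if( statvfs(zTmp, &fs)!=0 ){
    perror("statvfs");
    return -1;
  }
  free_bytes = (unsigned long long)fs.f_bavail * fs.f_frsize;
  if( free_bytes < repo_size_bytes ){
    fprintf(stderr,
      "not enough free space in %s (%llu MiB free, about %llu MiB needed);\n"
      "consider setting SQLITE_TMPDIR to a folder on a larger partition\n",
      zTmp, free_bytes/(1024*1024), repo_size_bytes/(1024*1024));
    return -1;
  }
  return 0;
}

int main(void){
  /* Example: the clone above received about 5866110695 bytes. */
  return check_tmp_space(5866110695ULL)==0 ? 0 : 1;
}

The part the sketch leaves out is obtaining the repository size from
the server before any artifacts are downloaded.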

Thank You, everybody, for Your answers and help, and
thank You for reading my comment/letter.

Regards,
martin.v...@softf1.cm


___
fossil-users mailing list
fossil-users@lists.fossil-scm.org
http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users


Re: [fossil-users] Bug Report or a Feature Request: Cope with Hosting Providers' Watchdogs

2017-05-16 Thread Richard Hipp
On 5/16/17, Martin Vahi  wrote:
>
>
> I'm not totally sure, what the issue in my case is, but
> I suspect that the issue might be that the hosting provider
> has some time-to-live limit for every operating
> system process that is started for serving a request and
> if the Fossil takes "too long" to process the request,
> then it gets killed by the hosting provider's watchdog
> before the Fossil process completes the commit operation.
>
> Are there any "heart-beat" options available, where
> a cron job might call something like
>
> fossil --heartbeat --max-duration=10s
>
> and during that "maximum duration" time period a small
> chunk of the work gets done?

What operation is it trying to do that takes more than 10 seconds?
Usually Fossil runs for more like 10 milliseconds.

Building big ZIP archives or tarballs takes a long time.  (FWIW, I have
profiled those operations, and Fossil is spending most of its time
inside of zlib, doing the requisite compression.)  Big "annotates"
can also take some time if you have a long history.  Do you know what
operations are timing out?

Most operations are read-only.  So breaking them up into separate
transactions won't really help anything.  Write operations (receiving
a push) are usually very quick.
-- 
D. Richard Hipp
d...@sqlite.org


Re: [fossil-users] Bug Report or a Feature Request: Cope with Hosting Providers' Watchdogs

2017-05-16 Thread Andy Bradford
Thus said Martin Vahi on Tue, 16 May 2017 22:43:07 +0300:

> Are there  any "heart-beat" options  available, where a cron  job might
> call something like

No, however, there are options to control how much Fossil will sync in a
single  round-trip.  Fossil  does synchronize  individual  artifacts  in
batches and all artifacts that  are sent/received in a single round-trip
are committed (in the RDBMS sense of  the word) to the repository, so if
your connection drops, or your hosting provider limits process time, you
don't have to  resynchronize those parts that  were already successfully
synchronized.

Some of the settings that control how  much data is sent in a round-trip
(or how much time is permitted) are:

max-download (server)
max-download-time (server)
max-upload (client)

You can get to the server  settings using the /setup_access page on your
server.

You  can get  to the  client settings  from the  command line  (or using
fossil ui on your workstation).
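
For example, the client-side setting could be adjusted like this from
within an open checkout (the value shown is only an illustration):

  fossil settings max-upload 100000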

Of course, if the size of  your artifacts is greater than these settings
it may not  help as much. From  the output that you sent,  it looks like
maybe  your artifacts  are quite  large  so you  won't be  able to  take
advantage of these settings:

> mishoidla/sandbox_of_the_Fossil_repository$ fossil push --private
> Push to https://martin_v...@www.softf1.com/cgi-bin/tree1/
> technology/flaws/silktorrent.bash/
> Round-trips: 1   Artifacts sent: 0  received: 0
> server did not reply
> Push done, sent: 651648435  received: 12838  ip: 185.7.252.74

Andy
-- 
TAI64 timestamp: 4000591b78f8

