Re: [fossil-users] Bug Report or a Feature Request: Cope with Hosting Providers' Watchdogs

2017-05-16 Thread Richard Hipp
On 5/16/17, Martin Vahi  wrote:
>
>
> I'm not totally sure what the issue in my case is, but
> I suspect that the hosting provider has some time-to-live
> limit for every operating system process that is started
> for serving a request, and if Fossil takes "too long" to
> process the request, then it gets killed by the hosting
> provider's watchdog before the Fossil process completes
> the commit operation.
>
> Are there any "heart-beat" options available, where
> a cron job might call something like
>
> fossil --heartbeat --max-duration=10s
>
> and during that "maximum duration" time period a small
> chunk of the work gets done?

What operation is it trying to do that takes more than 10 seconds?
Usually Fossil runs for more like 10 milliseconds.

Building big ZIP archives or tarballs can take a long time.  (FWIW, I
have profiled those operations, and Fossil spends most of its time
inside of zlib, doing the required compression.)  Big "annotates" can
also take some time if you have a long history.  Do you know which
operations are timing out?
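
(For reference, "inside of zlib" means a loop like the following -- a
minimal sketch of zlib's standard streaming-deflate pattern, compressing
stdin to stdout; it is not Fossil's actual archive code:)

    /* Streaming deflate sketch (canonical zlib usage, error handling
     * trimmed).  Build with: cc zsketch.c -lz */
    #include <stdio.h>
    #include <zlib.h>

    #define CHUNK 16384

    int main(void) {
        unsigned char in[CHUNK], out[CHUNK];
        z_stream strm = {0};                  /* default allocators */
        int flush;
        if (deflateInit(&strm, Z_DEFAULT_COMPRESSION) != Z_OK) return 1;
        do {
            strm.avail_in = (uInt)fread(in, 1, CHUNK, stdin);
            flush = feof(stdin) ? Z_FINISH : Z_NO_FLUSH;
            strm.next_in = in;
            do {                              /* drain the compressor */
                strm.avail_out = CHUNK;
                strm.next_out = out;
                deflate(&strm, flush);        /* the CPU-heavy call */
                fwrite(out, 1, CHUNK - strm.avail_out, stdout);
            } while (strm.avail_out == 0);
        } while (flush != Z_FINISH);
        deflateEnd(&strm);
        return 0;
    }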

Most operations are read-only.  So breaking them up into separate
transactions won't really help anything.  Write operations (receiving
a push) are usually very quick.
-- 
D. Richard Hipp
d...@sqlite.org


Re: [fossil-users] Bug Report or a Feature Request: Cope with Hosting Providers' Watchdogs

2017-05-16 Thread Andy Bradford
Thus said Martin Vahi on Tue, 16 May 2017 22:43:07 +0300:

> Are there any "heart-beat" options available, where a cron job might
> call something like

No. However, there are options to control how much Fossil will sync in
a single round-trip. Fossil synchronizes individual artifacts in
batches, and all artifacts that are sent/received in a single
round-trip are committed (in the RDBMS sense of the word) to the
repository, so if your connection drops, or your hosting provider
limits process time, you don't have to resynchronize the parts that
were already successfully synchronized.

Some of the settings that control how much data is sent in a round-trip
(or how much time is permitted) are:

max-download (server)
max-download-time (server)
max-upload (client)

You can get to the server settings using the /setup_access page on your
server.

You can get to the client settings from the command line (or using
fossil ui on your workstation).
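
For example, on the client side (a minimal illustration; the 100000-byte
value is arbitrary, chosen only for the example):

    fossil settings max-upload           # show the current limit
    fossil settings max-upload 100000    # cap the data sent per round-trip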

Of course, if the size of your artifacts is greater than these limits,
it may not help as much. From the output that you sent, it looks like
your artifacts are quite large, so you may not be able to take
advantage of these settings:

> mishoidla/sandbox_of_the_Fossil_repository$ fossil push --private
> Push to https://martin_v...@www.softf1.com/cgi-bin/tree1/technology/flaws/silktorrent.bash/
> Round-trips: 1   Artifacts sent: 0  received: 0
> server did not reply
> Push done, sent: 651648435  received: 12838  ip: 185.7.252.74

Andy
-- 
TAI64 timestamp: 4000591b78f8




[fossil-users] Bug Report or a Feature Request: Cope with Hosting Providers' Watchdogs

2017-05-16 Thread Martin Vahi


I'm not totally sure what the issue in my case is, but
I suspect that the hosting provider has some time-to-live
limit for every operating system process that is started
for serving a request, and if Fossil takes "too long" to
process the request, then it gets killed by the hosting
provider's watchdog before the Fossil process completes
the commit operation.

Are there any "heart-beat" options available, where
a cron job might call something like

fossil --heartbeat --max-duration=10s

and during that "maximum duration" time period a small
chunk of the work gets done?

---console--session--excerpt--start
ts2@linux-0fiz:~/Projektid/progremise_infrastruktuur/andmevahetustarkvara/rakendusvõrgud/silktorrent/publitseerimishoidla/sandbox_of_the_Fossil_repository$ fossil open ../repository_storage.fossil
project-name: 
repository:   /opt/2dot7TiB_k8vaketas/ts2/mittevarundatav/hiljem_siiski_varundatav/Projektid/progremise_infrastruktuur/andmevahetustarkvara/rakendusvõrgud/silktorrent/publitseerimishoidla/sandbox_of_the_Fossil_repository/../repository_storage.fossil
local-root:   /opt/2dot7TiB_k8vaketas/ts2/mittevarundatav/hiljem_siiski_varundatav/Projektid/progremise_infrastruktuur/andmevahetustarkvara/rakendusvõrgud/silktorrent/publitseerimishoidla/sandbox_of_the_Fossil_repository/
config-db:    /home/ts2/m_local/bin_p/fossil_scm_org/vd2017_04_28/.fossil
project-code: 26101fc480a34b3b993c8c83b7511840ab9d0c17
checkout:     c260d3eb188e94d0e83a9807b5e7325f994956eb 2017-05-16 16:59:42 UTC
parent:       58fe99e749cc4e596ab81f7915b3b512a2d2ca17 2017-05-16 00:49:42 UTC
tags:         trunk
comment:      wiki reference updates (user: martin_vahi)
check-ins:    21

ts2@linux-0fiz:~/Projektid/progremise_infrastruktuur/andmevahetustarkvara/rakendusvõrgud/silktorrent/publitseerimishoidla/sandbox_of_the_Fossil_repository$ fossil push --private
Push to https://martin_v...@www.softf1.com/cgi-bin/tree1/technology/flaws/silktorrent.bash/
Round-trips: 1   Artifacts sent: 0  received: 0
server did not reply
Push done, sent: 651648435  received: 12838  ip: 185.7.252.74

ts2@linux-0fiz:~/Projektid/progremise_infrastruktuur/andmevahetustarkvara/rakendusvõrgud/silktorrent/publitseerimishoidla/sandbox_of_the_Fossil_repository$ fossil version
This is fossil version 2.2 [81d7d3f43e] 2017-04-11 20:54:55 UTC

ts2@linux-0fiz:~/Projektid/progremise_infrastruktuur/andmevahetustarkvara/rakendusvõrgud/silktorrent/publitseerimishoidla/sandbox_of_the_Fossil_repository$ uname -a
Linux linux-0fiz 3.16.7-53-desktop #1 SMP PREEMPT Fri Dec 2 13:19:28 UTC 2016 (7b4a1f9) x86_64 x86_64 x86_64 GNU/Linux

ts2@linux-0fiz:~/Projektid/progremise_infrastruktuur/andmevahetustarkvara/rakendusvõrgud/silktorrent/publitseerimishoidla/sandbox_of_the_Fossil_repository$
---console--session--excerpt--end--


Thank You for reading my comment and
thank You for the answers.




Re: [fossil-users] Feature Request: Commit in Chunks to use less RAM

2017-05-16 Thread Stephan Beal
On Tue, May 16, 2017 at 6:15 PM, Martin Vahi  wrote:

>
>
> ---console--session--excerpt--start
> SQLITE_NOMEM: failed to allocate 651456745 bytes of memory
> SQLITE_NOMEM: statement aborts at 22: [INSERT INTO
> blob(rcvid,size,uuid,content)VALUES(6,662828201,'9b81ec309fc0c2f2278f386c8b1917359fe24bd8',:data)]
>

Even if committing in chunks were feasible (it's currently not, unless
i'm sorely mistaken), that wouldn't solve the problem of trying to
commit a 650MB file, because chunks presumably could not span individual
files. Checking in a new version of that file (of a similar size) will
take more than twice that amount of memory (see below).
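
(Aside: at the raw SQLite level it is possible to write a large blob
without one giant allocation, using SQLite's incremental blob I/O. A
minimal sketch follows. To be clear, this is not fossil's code path --
fossil compresses and delta-encodes artifact content in RAM before the
INSERT, which is where the big buffers come from -- and the simplified
table layout and file names below are hypothetical:)

    /* Chunked blob write via SQLite incremental blob I/O (sketch only;
     * error checking omitted for brevity).  Reserves space with
     * zeroblob() and then streams the file in 64 KiB pieces. */
    #include <stdio.h>
    #include <sqlite3.h>

    int main(void) {
        sqlite3 *db;
        sqlite3_stmt *st;
        sqlite3_blob *out;
        char buf[65536];
        long off = 0, sz;
        size_t n;

        FILE *f = fopen("bigfile.bin", "rb");      /* hypothetical input */
        if (!f) return 1;
        fseek(f, 0, SEEK_END); sz = ftell(f); rewind(f);

        sqlite3_open("demo.db", &db);
        sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS blob(content BLOB)",
                     0, 0, 0);

        /* Reserve sz zero bytes without allocating sz bytes of RAM. */
        sqlite3_prepare_v2(db,
            "INSERT INTO blob(content) VALUES(zeroblob(?))", -1, &st, 0);
        sqlite3_bind_int64(st, 1, sz);
        sqlite3_step(st);
        sqlite3_finalize(st);

        /* Open the just-inserted row and fill it chunk by chunk. */
        sqlite3_blob_open(db, "main", "blob", "content",
                          sqlite3_last_insert_rowid(db), 1, &out);
        while ((n = fread(buf, 1, sizeof buf, f)) > 0) {
            sqlite3_blob_write(out, buf, (int)n, (int)off);
            off += (long)n;
        }
        sqlite3_blob_close(out);
        sqlite3_close(db);
        fclose(f);
        return 0;
    }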


> fossil: SQL error: out of memory
> SQLITE_ERROR: statement aborts at 1: [ROLLBACK] cannot rollback - no
> transaction is active
> ---console--session--excerpt--end--
>
> The obvious line of thought might be that
> I should just upgrade my hardware, but if there's
> no fundamental reason why files cannot be
> committed in chunks


For one, fossil's delta implementation requires that the whole file
contents be in memory (plus memory for storing the delta from the
previous version, if any). Applying a delta requires the original
version, the delta, and the new copy (with the delta applied) to be in
memory at once. Thus it cannot be used to manage huge files on systems
with really limited memory.
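
To make the "more than twice" arithmetic concrete, here is a
back-of-envelope sketch; the new-version and delta sizes are assumptions
chosen for illustration, not measurements:

    /* Peak RAM estimate for checking in a new version of a ~650 MB file,
     * following the reasoning above: the old version, the new version,
     * and the delta must coexist in memory. */
    #include <stdio.h>

    int main(void) {
        long long oldV  = 651456745;   /* previous version (from the log) */
        long long newV  = 651456745;   /* assume a similar-sized new version */
        long long delta = newV / 10;   /* assume a 10% delta */
        long long peak  = oldV + newV + delta;
        printf("peak ~ %lld bytes (~%.2f GiB)\n",
               peak, peak / (1024.0 * 1024 * 1024));
        return 0;
    }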


> The idea is that some embedded systems have only
> 64MiB of RAM, which is actually quite a lot,
> given that in the '90s my desktop computer
> had about 40MiB.


That was in the '90s. Nowadays web browsers regularly allocate that amount
of memory (or more) just for the JavaScript engine.

i'm not saying "go buy more hardware," but several parts of fossil's
internals require amounts of memory directly related to (but notably larger
than) the size of the biggest file being operated on.

-- 
- stephan beal
http://wanderinghorse.net/home/stephan/
"Freedom is sloppy. But since tyranny's the only guaranteed byproduct of
those who insist on a perfect world, freedom will have to do." -- Bigby Wolf


[fossil-users] Feature Request: Commit in Chunks to use less RAM

2017-05-16 Thread Martin Vahi


---console--session--excerpt--start
SQLITE_NOMEM: failed to allocate 651456745 bytes of memory
SQLITE_NOMEM: statement aborts at 22: [INSERT INTO
blob(rcvid,size,uuid,content)VALUES(6,662828201,'9b81ec309fc0c2f2278f386c8b1917359fe24bd8',:data)]

fossil: SQL error: out of memory
SQLITE_ERROR: statement aborts at 1: [ROLLBACK] cannot rollback - no
transaction is active
---console--session--excerpt--end--

The obvious line of thought might be that
I should just upgrade my hardware, but if there's
no fundamental reason why files cannot be
committed in chunks, then it would be nice if
Fossil just checked how much RAM is available
and then used some formula to allocate some
percentage less than 100% of the free RAM,
trying to do all of the work within that budget.
Maybe the default minimum could be 5MiB
(five mebibytes), but the minimum might also be
an optional console parameter. The idea is that
some embedded systems have only 64MiB of RAM,
which is actually quite a lot, given that in the
'90s my desktop computer had about 40MiB.
(I was born in 1981, so I was a teenager in the
'90s.) That 40MiB allowed me to play Wolfenstein
and, though I do not remember exactly, maybe
even Doom 2.

The old Raspberry Pi 1, which has a single 700MHz CPU core,
has about 400MiB of RAM (512MiB minus video memory).
The new Raspberry Pi 3, which has 4 CPU cores, has
about 900MiB of RAM (1GiB minus video memory).
Given the "Trusted Computing" trends of the Intel and AMD CPUs,
a solution where a private cluster of Raspberry-Pi-like
computers is used instead of a single, "huge" PC
becomes more and more relevant. There are even academic
projects that try to construct such personal workstations:

https://suif.stanford.edu/collective/

The industry has been contemplating virtual-machine-based
separation for quite a while:

https://www.qubes-os.org/

In my view, though, the most elegant solution
for separating and then "re-joining" different applications
on a single machine is

https://genode.org/

Historically speaking, maybe one of the first, if not the first,
cluster-based personal workstation ideas came in the
form of the Plan 9 operating system, which seemed to
be more of a cluster-based single server with terminals
than a workstation, but the general idea seems to be there:

http://www.plan9.bell-labs.com/wiki/plan9/plan_9_wiki/

That is to say, a cluster of Raspberry-Pi-like computers
has a long history of trial and error, and even when the
cluster has a lot of RAM, that RAM is divided into relatively
small chunks between the individual computers. The ability
to either parallelize a task or to get by with a small
amount of RAM can therefore be practical.


Thank You for reading my comment :-)




Re: [fossil-users] crlf-glob

2017-05-16 Thread Arjen Markus

From: fossil-users [mailto:fossil-users-boun...@lists.fossil-scm.org] On Behalf 
Of Stephan Beal
Sent: Tuesday, May 16, 2017 11:09 AM
To: Fossil SCM user's discussion
Subject: Re: [fossil-users] crlf-glob

> On Tue, May 16, 2017 at 12:16 AM, Thomas  wrote:
>> On 2017-05-15 23:09, Warren Young wrote:
>>> On May 15, 2017, at 3:27 PM, Thomas  wrote:
>>>>
>>>> Does it really matter in the 21st century if a line is terminated by CR, LF, or
>>>> CR/LF anymore?
>>>
>>> Notepad.exe in Windows 10 Creator's Edition still only works properly with
>>> CR+LF.  Since that's the default handler for *.txt on Windows, yes, line ending
>>> type still matters for any cross-platform project.
>>
>> So, after editing a file that belongs to your project with Notepad on Windows,
>> would you expect an SCM complaining about it when you commit?
>>
>> I wouldn't.
>
> Real-life case: a developer on a banking project i worked on edited a shell
> script with notepad and checked it in (without having tested it). The SCM did
> not complain about Notepad-injected \r characters. After deployment on the live
> system this script started exiting with "bad interpreter". The reason, which we
> discovered only after opening the script in Emacs, which shows \r as ^M, was
> that notepad had mangled it, changing the first line to:
>
> #!/bin/sh^M
>
> The ^M (\r) is just another character for most Unix tools, and the system was
> treating the \r as part of the shell's name, which of course didn't work.


Warning: one of my favourite complaints ahead ...
Well, actually it gets worse. The CRLF is interpreted as a line separator, not
an end-of-line marker, so the last line may lack that marker. There is then no
way to distinguish a "complete" file from a truncated one.

And then there is the BOM, which editors on Windows sometimes insert ...

Regards,

Arjen


Re: [fossil-users] crlf-glob

2017-05-16 Thread Stephan Beal
On Tue, May 16, 2017 at 12:16 AM, Thomas  wrote:

> On 2017-05-15 23:09, Warren Young wrote:
>
>> On May 15, 2017, at 3:27 PM, Thomas  wrote:
>>
>>>
>>> Does it really matter in the 21st century if a line is terminated by CR,
>>> LF, or CR/LF anymore?
>>>
>>
>> Notepad.exe in Windows 10 Creator’s Edition still only works properly
>> with CR+LF.  Since that’s the default handler for *.txt on Windows, yes,
>> line ending type still matters for any cross-platform project.
>>
>
> So, after editing a file that belongs to your project with Notepad on
> Windows, would you expect an SCM complaining about it when you commit?
>
> I wouldn't.



Real-life case: a developer on a banking project i worked on edited a shell
script with notepad and checked it in (without having tested it). The SCM
did not complain about Notepad-injected \r characters. After deployment on
the live system this script started exiting with "bad interpreter". The
reason, which we discovered only after opening the script in Emacs, which
shows \r as ^M, was that notepad had mangled it, changing the first line to:

#!/bin/sh^M

The ^M (\r) is just another character for most Unix tools, and the system
was treating the \r as part of the shell's name, which of course didn't
work.
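
(The failure is easy to reproduce. A minimal sketch follows; the file
name is hypothetical and the exact "bad interpreter" message varies by
shell:)

    /* Reproduce the "bad interpreter" failure caused by CRLF in a
     * shebang line: the kernel includes the \r in the interpreter path
     * and looks for "/bin/sh\r", which does not exist. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>

    int main(void) {
        FILE *f = fopen("crlf-demo.sh", "w");
        if (!f) return 1;
        fputs("#!/bin/sh\r\necho hello\r\n", f);  /* CRLF, as Notepad writes */
        fclose(f);
        chmod("crlf-demo.sh", 0755);
        /* Typically prints something like:
         *   sh: ./crlf-demo.sh: /bin/sh^M: bad interpreter: No such file... */
        return system("./crlf-demo.sh");
    }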


-- 
- stephan beal
http://wanderinghorse.net/home/stephan/
"Freedom is sloppy. But since tyranny's the only guaranteed byproduct of
those who insist on a perfect world, freedom will have to do." -- Bigby Wolf