[fossil-users] Are commits with "a lot" of small files still an issue?

2018-06-19 Thread Martin Vahi

I have not experienced any problems like that myself, but
I ran into the following while searching for online
documentation on recursive unversioned insertion of files.
That is, I wanted to run

fossil unversioned add

recursively, without it telling me that paths starting with "./", like

fossil unversioned add ./foo/bar_01
fossil unversioned add ./foo/bar_02

fail with, I quote: "not an acceptable filename"

---You-may-skip-reading-this-citation--but-here-it--starts---
ts2@linux-f26r:~/Projektid/progremise_infrastruktuur/andmevahetustarkvara/rakendusvõrgud/silktorrent/publitseerimishoidla/sandbox_of_the_Fossil_repository/wiki_references/2017/software/simulators/ns3_Discrete-event_Network_Simulator/doc$
fossil unversioned add ./2017_06_13_wget_copy_of_https_www_nsnam.org_documentation/bonnet/www.nsnam.org/docs/release/3.26/doxygen/structns3_1_1_ul_grant__s.html
''./2017_06_13_wget_copy_of_https_www_nsnam.org_documentation/bonnet/www.nsnam.org/docs/release/3.26/doxygen/structns3_1_1_ul_grant__s.html'' is not an acceptable filename
ts2@linux-f26r:~/Projektid/progremise_infrastruktuur/andmevahetustarkvara/rakendusvõrgud/silktorrent/publitseerimishoidla/sandbox_of_the_Fossil_repository/wiki_references/2017/software/simulators/ns3_Discrete-event_Network_Simulator/doc$
ls -l ./2017_06_13_wget_copy_of_https_www_nsnam.org_documentation/bonnet/www.nsnam.org/docs/release/3.26/doxygen/structns3_1_1_ul_grant__s.html
-rwxr-xr-x 1 ts2 users 17646 Apr  5 21:25 ./2017_06_13_wget_copy_of_https_www_nsnam.org_documentation/bonnet/www.nsnam.org/docs/release/3.26/doxygen/structns3_1_1_ul_grant__s.html
ts2@linux-f26r:~/Projektid/progremise_infrastruktuur/andmevahetustarkvara/rakendusvõrgud/silktorrent/publitseerimishoidla/sandbox_of_the_Fossil_repository/wiki_references/2017/software/simulators/ns3_Discrete-event_Network_Simulator/doc$
---the-end-of-the-semi-irrelevant-citation---
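A possible work-around sketch (my own, untested against every Fossil version): strip the leading "./" before handing each path to "fossil unversioned add". The add_unversioned helper name and the DRY_RUN flag are mine, not part of Fossil:

```shell
# Hypothetical work-around: "fossil unversioned add" rejects paths with a
# leading "./", so strip that prefix and add the files one at a time.
# DRY_RUN=1 only prints the commands instead of invoking fossil.
DRY_RUN=1
add_unversioned() {
  find "$1" -type f | sed 's|^\./||' | while IFS= read -r f; do
    if [ "$DRY_RUN" = 1 ]; then
      echo "fossil unversioned add $f"
    else
      fossil unversioned add "$f"
    fi
  done
}
```

With DRY_RUN=0 the same loop would actually invoke fossil once per file; that is slow for large trees, but it sidesteps the "./" rejection.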

I found one rather old blog post, from 2014, in which
a person who apparently works in game development
had problems with data imports that consisted
of a lot of small files. A citation from the
2014 blog post at

https://www.omiyagames.com/blog/2014/02/15/farewell-fossil-version-control/
(archival copy: https://archive.is/uBr3s )

--citation--start
I use Unity, and I import a lot of things
from the Unity Asset Store. Importing things
from the asset store sadly proved to be a
gigantic problem for Fossil. See, Fossil is
not only terrible at dealing with large files
(like most distributed revision controls are),
but also terrible at dealing with very large commits.
--citation--end--

I do not know exactly what he means by "importing from
the asset store", but I do know that games
have data items such as images/sprites, 3D models,
3D analogues of animated GIF files (3D sprites),
sounds, game levels, etc., and those can be pretty huge.
I think it would be quite interesting to know whether
Fossil can hold up to the demands of applied statisticians or,
as they tend to call themselves nowadays, "data scientists".
The loads from various IoT swarms can be a huge
set of relatively small files, yet the data used for
scientific papers should be versioned, so that
independent parties can re-analyze it and check the
calculations when verifying results. Fossil would be PERFECT for
encapsulating all of it: the scientific paper, the exact
copies and versions of the tools used for the paper,
the tech notes and/or research log (since the wiki is also
versioned), and the exact data set that was used
for drawing the presented conclusions.

The reason I am writing this letter
is that IF the insertion of large numbers of
very small files is still an issue, then
there is an opportunity to fix it
before anybody else runs into it and reports it.

Thank You :-)




___
fossil-users mailing list
fossil-users@lists.fossil-scm.org
http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users


Re: [fossil-users] Bug Report: Fossil 2.3 runs out of Memory During Check-in

2018-01-26 Thread Martin Vahi
> Date: Mon, 22 Jan 2018 09:06:10 -0500
> From: Richard Hipp <d...@sqlite.org>
> On 1/22/18, Martin Vahi <martin.v...@softf1.com> wrote:
>>
>> citation--start---
>> Fossil internal error: out of memory
>> citation--end-
>>
>> It happened during the execution of the
>>
>> fossil ci
>
> Do you have a test case that we can use for debugging?
>...

Now I do. A ~18.3GiB file resides at


https://temporary.softf1.com/2018/bugs/2018_01_26_Fossil_out_of_RAM_bug_test_case_t1.tar.xz

SHA256:
e671cbfc804b91d2295e00deae5f9ca4ab81b7c8a94ee7a3c7a2118ef952d2f9

The tar.xz can also be downloaded via BitTorrent.
The torrent file resides at:


https://temporary.softf1.com/2018/bugs/2018_01_26_Fossil_out_of_RAM_bug_test_case_t1.tar.xz.torrent

After unpacking the tar.xz, the tar file is about 23GiB.
About 5GiB of it is the Fossil repository file and
about 17GiB of it is a tar file with about 140k files that
the test tries to insert into the Fossil repository.
The test script makes a copy of the 5GiB Fossil repository
file and runs "fossil open" on it, which copies files
from the repository copy to a temporary sandbox folder;
the test then unpacks the 17GiB tar file into the
Fossil sandbox folder and runs "fossil add" on the
new files. The overall HDD requirement is roughly

~18GiB + (3 * ~23GiB) = ~87GiB, i.e. about 90GiB
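The arithmetic above can be sketched as a small shell calculation; the sizes are the rounded figures from this message:

```shell
# Rough disk-space estimate for running the test case, in GiB:
# the downloaded archive plus roughly three working copies of the
# unpacked ~23GiB tar (repository copy, checkout, unpacked data).
ARCHIVE_GIB=18
UNPACKED_GIB=23
COPIES=3
TOTAL_GIB=$((ARCHIVE_GIB + COPIES * UNPACKED_GIB))
echo "need roughly ${TOTAL_GIB}GiB of free HDD space"
```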

I'll probably delete the tar.xz and the torrent file
from my home page after a few months, depending on
how much I need the HDD space at my hosting account.

Thank You (all) for the help and for the comments.




[fossil-users] Bug Report: Fossil 2.3 runs out of Memory During Check-in

2018-01-22 Thread Martin Vahi

citation--start---
Fossil internal error: out of memory
citation--end-

It happened during the execution of

fossil ci

Given that Fossil had an opportunity to allocate
at least 1GiB of RAM without the machine running out,
the issue must have something to do with the
algorithm. In my opinion, Fossil should be able to run even
on the old Raspberry Pi 1, which has 512MiB of RAM
in total: it should simply check
how much free RAM the computer has and
adjust its algorithm parameters accordingly.
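Fossil itself has no such auto-tuning as far as I know. As a stop-gap, a wrapper script can at least check how much RAM is available before attempting a large check-in. The warn_low_ram helper and the 512MiB threshold are my own choices, and reading /proc/meminfo is Linux-specific:

```shell
# Print "low" or "ok" depending on whether the available RAM
# (argument 1, in KiB) is below a threshold (argument 2, in MiB).
warn_low_ram() {
  if [ "$1" -lt $(($2 * 1024)) ]; then
    echo low
  else
    echo ok
  fi
}

# Linux-specific: read MemAvailable from /proc/meminfo (value is in KiB).
avail_kib=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
if [ "$(warn_low_ram "${avail_kib:-0}" 512)" = low ]; then
  echo "warning: less than 512MiB of RAM available; 'fossil ci' may fail" >&2
fi
```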



[fossil-users] How to Embed WebM Videos to Fossil Wiki Pages?

2018-01-03 Thread Martin Vahi

I know that the way to embed an image is by

![name of the link]()

but it does not seem to work with WebM videos.

https://www.webmproject.org/


WebM is a video file format that all modern web browsers
can play by default. WebM files can be served from
web pages just like ordinary images can be
embedded: upload the file to a web server, and
the web server does not need any special capabilities to
serve it. One of the benefits of the WebM format
is that a video can be seeked without downloading the
whole video file first, and this seeking
works with ordinary, mainstream web servers.
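As a possible work-around (my own untested suggestion): when a Fossil wiki or embedded-doc page is allowed to contain raw HTML, the standard HTML5 video element should be able to play a WebM file. The file name below is just a placeholder:

```html
<!-- Hypothetical embedding of a WebM file via the HTML5 video element.
     "my_video.webm" is a placeholder for a file served by the repository. -->
<video controls width="640" preload="metadata">
  <source src="my_video.webm" type="video/webm">
  Your browser does not support the video element.
</video>
```

The preload="metadata" attribute asks the browser to fetch only the headers up front, which preserves the seek-without-full-download behaviour described above.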

A video for trying out the scrolling possibilities
of a WebM video file:

(The file is ~386MiB)

https://archive.softf1.com/2015/2015_12_27_ccc_de_Joanna_Rutkowska_Towards_Reasonably_Trustworthy_x86_Laptops.webm

On Linux WebM files can be created by

nice -n20 ffmpeg -i ./Scooby_Doo_video.mp4 ./Spooky_video.webm
# It takes a while to convert, hence the "nice".

Thank You for reading my letter &
thank You for the answers  :-)


P.S. I briefly searched the Fossil mailing list
archive on my local computer, and I suspect that
I have asked something similar in the past. The lack
of WebM video support is something that
I seem to stumble upon repeatedly.



Re: [fossil-users] Bug Report: Cloning with --private Fails

2017-11-01 Thread Martin Vahi
>...
> Date: Tue, 24 Oct 2017 09:14:59 -0400
> From: Karn Kallio <tierplusplusli...@skami.org>
> To: fossil-users@lists.fossil-scm.org
> Subject: Re: [fossil-users] Bug Report: Cloning with --private Fails
> Message-ID: <20171024091459.3053205e@eka>
> ...
> 
> Also, with fossil 2.3 after cloning a private branch you will likely
> encounter errors when trying to synchronize it, such as this :
> 
> Error: Database error: UNIQUE constraint failed: private.rid: {INSERT
> INTO private VALUES(4)}
> 
> Inspecting the source, it seems that in the function content_put_ex in
> the file content.c that only a new rid should be marked as private,
> since a rid that adds data to a phantom will already have had the
> private marking done when the phantom was added.
> 
> The following patch ...
>...



>...
> Date: 23 Oct 2017 19:27:08 -0600
> From: "Andy Bradford" <amb-fos...@bradfords.org>
> To: "Martin Vahi" <martin.v...@softf1.com>
> Cc: fossil-users@lists.fossil-scm.org
> Subject: Re: [fossil-users] Bug Report: Cloning with --private Fails
> Message-ID: <20171023192708.22275.qm...@angmar.bradfordfamily.org>
> ...
>
> Thus said Martin Vahi on Mon, 23 Oct 2017 11:27:03 +0300:
>
>> It doesn't even prompt for a password.
>
> You didn't give it a username for which it should prompt.
>
> Try:
>
> time nice -n18 fossil clone --unversioned --private --admin-user
https://usern...@www.softf1.com/cgi-bin/tree1/technology/flaws/silktorrent.bash/
./repository_storage.fossil
>
> Where username  is your username  that you want  to clone with.  Or, you
> need to give the nobody user the right to clone private content.
>
> Andy
>...

Thank You (all) for the kind answers and help, but
unfortunately I suspect that there is still some work to be done.
Either I am misusing Fossil, which I tend to do, or
the "--private" option triggers some kind of flaw/bug.


The failing command line, with the key phrase
"server returned an error - clone aborted":

citation---start--

ts2@linux-0fiz:~/Projektid/progremise_infrastruktuur/andmevahetustarkvara/rakendusvõrgud/silktorrent/publitseerimishoidla$
SQLITE_TMPDIR=`pwd`/tmp_  time nice -n18 fossil clone --unversioned
--private --admin-user martin_vahi
https://www.softf1.com/cgi-bin/tree1/technology/flaws/silktorrent.bash/
./repository_storage.fossil
Round-trips: 2   Artifacts sent: 0  received: 238
Error: not authorized to sync private content
Round-trips: 2   Artifacts sent: 0  received: 238
Clone done, sent: 681  received: 65921513  ip: 185.7.252.74
server returned an error - clone aborted
3.31user 2.32system 7:36.38elapsed 1%CPU (0avgtext+0avgdata
49660maxresident)k
280inputs+56208outputs (2major+24160minor)pagefaults 0swaps

ts2@linux-0fiz:~/Projektid/progremise_infrastruktuur/andmevahetustarkvara/rakendusvõrgud/silktorrent/publitseerimishoidla$
uname -a
Linux linux-0fiz 3.16.7-53-desktop #1 SMP PREEMPT Fri Dec 2 13:19:28
UTC 2016 (7b4a1f9) x86_64 x86_64 x86_64 GNU/Linux

ts2@linux-0fiz:~/Projektid/progremise_infrastruktuur/andmevahetustarkvara/rakendusvõrgud/silktorrent/publitseerimishoidla$
date
Wed Nov  1 01:56:01 EET 2017

ts2@linux-0fiz:~/Projektid/progremise_infrastruktuur/andmevahetustarkvara/rakendusvõrgud/silktorrent/publitseerimishoidla$
fossil version
This is fossil version 2.3 [f7914bfdfa] 2017-07-21 03:19:30 UTC

ts2@linux-0fiz:~/Projektid/progremise_infrastruktuur/andmevahetustarkvara/rakendusvõrgud/silktorrent/publitseerimishoidla$
citation---end


The same command line, but without "--private", works:

citation---start--

ts2@linux-0fiz:~/Projektid/progremise_infrastruktuur/andmevahetustarkvara/rakendusvõrgud/silktorrent/publitseerimishoidla$
SQLITE_TMPDIR=`pwd`/tmp_  time nice -n18 fossil clone --unversioned
--admin-user martin_vahi
https://www.softf1.com/cgi-bin/tree1/technology/flaws/silktorrent.bash/
./repository_storage.fossil
Round-trips: 51   Artifacts sent: 0  received: 54802
Clone done, sent: 15611  received: 4764758194  ip: 185.7.252.74
Rebuilding repository meta-data...
  100.0% complete...
Extra delta compression...
Vacuuming the database...
project-id: 26101fc480a34b3b993c8c83b7511840ab9d0c17
server-id:  5787d7cb7d0db19d65cdab01054d02366bb2ad1c
admin-user: martin_vahi (password is "c16a98")
457.41user 431.44system 5:26:04elapsed 4%CPU (0avgtext+0avgdata
2337020maxresident)k
48059520inputs+55060832outputs (45122major+4770331minor)pagefaults
0swaps

ts2@linux-0fiz:~/Projektid/progremise_infrastruktuur/andmevahetustarkvara/rakendusvõrgud/silktorrent/publitseerimishoidla$
fossil version
This is fossil version 2.3 [f7914bfdfa] 2017-07-21 03:19:30 UTC

ts2@linux-0fiz:~/Projektid/progremise_infra

[fossil-users] Bug Report: Cloning with --private Fails

2017-10-23 Thread Martin Vahi

It doesn't even prompt for a password.
Both the client-side and the server-side Fossil binaries
are version 2.3.

---citation--start-
time nice -n18 fossil clone --unversioned --private --admin-user
martin_vahi
https://www.softf1.com/cgi-bin/tree1/technology/flaws/silktorrent.bash/
./repository_storage.fossil
Round-trips: 2   Artifacts sent: 0  received: 238
Error: not authorized to sync private content
Round-trips: 2   Artifacts sent: 0  received: 238
Clone done, sent: 683  received: 65921513  ip: 185.7.252.74
server returned an error - clone aborted

real1m31.898s
user0m5.254s
sys 0m2.647s
ts2@linux-0fiz:~/Projektid/progremise_infrastruktuur/andmevahetustarkvara/rakendusvõrgud/silktorrent/publitseerimishoidla$
---citation--end




[fossil-users] Bug Report: Fossil 2.3 "make test" Fails: test pre-commit-warnings-fossil-1 FAILED!

2017-10-20 Thread Martin Vahi

Reproduction:

Download and extract the

https://www.fossil-scm.org/index.html/uv/fossil-src-2.3.tar.gz


At the root of the source tree, run with GCC:

./configure --prefix= --json --with-th1-docs --with-th1-hooks --disable-fusefs
make   # was/is successful
make test  # is where the failure(s) occur


---citation--start-
ts2@linux-0fiz:~/m_local/bin_p/fossil_scm_org/kompil/fossil-2.3$ make test
tclsh ./src/../test/tester.tcl fossil -quiet
ERROR: current directory is not within an open checkout
test pre-commit-warnings-fossil-1 FAILED!
RESULT: current directory is not within an open checkout
*** Error in
`/opt/2dot7TiB_k8vaketas/ts2/mittevarundatav/_home/m_local/bin_p/fossil_scm_org/kompil/fossil-2.3/fossil':
free(): invalid pointer: 0x0144c520 ***
=== Backtrace: =
/lib64/libc.so.6(+0x727df)[0x7efe6723a7df]
/lib64/libc.so.6(+0x7804e)[0x7efe6724004e]
/opt/2dot7TiB_k8vaketas/ts2/mittevarundatav/_home/m_local/bin_p/fossil_scm_org/kompil/fossil-2.3/fossil[0x5210ab]
/opt/2dot7TiB_k8vaketas/ts2/mittevarundatav/_home/m_local/bin_p/fossil_scm_org/kompil/fossil-2.3/fossil[0x43ffbc]
/opt/2dot7TiB_k8vaketas/ts2/mittevarundatav/_home/m_local/bin_p/fossil_scm_org/kompil/fossil-2.3/fossil[0x46b267]
/lib64/libc.so.6(__libc_start_main+0xf5)[0x7efe671e9b05]
/opt/2dot7TiB_k8vaketas/ts2/mittevarundatav/_home/m_local/bin_p/fossil_scm_org/kompil/fossil-2.3/fossil[0x404979]
=== Memory map: 
0040-00776000 r-xp  08:11 127018800
/opt/2dot7TiB_k8vaketas/ts2/mittevarundatav/_home/m_local/bin_p/fossil_scm_org/kompil/fossil-2.3/fossil
00975000-00976000 r--p 00375000 08:11 127018800
/opt/2dot7TiB_k8vaketas/ts2/mittevarundatav/_home/m_local/bin_p/fossil_scm_org/kompil/fossil-2.3/fossil
00976000-0097f000 rw-p 00376000 08:11 127018800
/opt/2dot7TiB_k8vaketas/ts2/mittevarundatav/_home/m_local/bin_p/fossil_scm_org/kompil/fossil-2.3/fossil
0097f000-00983000 rw-p  00:00 0
0144a000-014fa000 rw-p  00:00 0
[heap]
7efe66dad000-7efe66dc3000 r-xp  fe:01 4324341
/lib64/libgcc_s.so.1
7efe66dc3000-7efe66fc2000 ---p 00016000 fe:01 4324341
/lib64/libgcc_s.so.1
7efe66fc2000-7efe66fc3000 r--p 00015000 fe:01 4324341
/lib64/libgcc_s.so.1
7efe66fc3000-7efe66fc4000 rw-p 00016000 fe:01 4324341
/lib64/libgcc_s.so.1
7efe66fc4000-7efe66fc7000 r-xp  fe:01 4199866
/lib64/libdl-2.19.so
7efe66fc7000-7efe671c6000 ---p 3000 fe:01 4199866
/lib64/libdl-2.19.so
7efe671c6000-7efe671c7000 r--p 2000 fe:01 4199866
/lib64/libdl-2.19.so
7efe671c7000-7efe671c8000 rw-p 3000 fe:01 4199866
/lib64/libdl-2.19.so
7efe671c8000-7efe67369000 r-xp  fe:01 4199794
/lib64/libc-2.19.so
7efe67369000-7efe67568000 ---p 001a1000 fe:01 4199794
/lib64/libc-2.19.so
7efe67568000-7efe6756c000 r--p 001a fe:01 4199794
/lib64/libc-2.19.so
7efe6756c000-7efe6756e000 rw-p 001a4000 fe:01 4199794
/lib64/libc-2.19.so
7efe6756e000-7efe67572000 rw-p  00:00 0
7efe67572000-7efe67587000 r-xp  fe:01 4202036
/lib64/libz.so.1.2.8
7efe67587000-7efe67786000 ---p 00015000 fe:01 4202036
/lib64/libz.so.1.2.8
7efe67786000-7efe67787000 r--p 00014000 fe:01 4202036
/lib64/libz.so.1.2.8
7efe67787000-7efe67788000 rw-p 00015000 fe:01 4202036
/lib64/libz.so.1.2.8
7efe67788000-7efe67981000 r-xp  fe:01 4200691
/lib64/libcrypto.so.1.0.0
7efe67981000-7efe67b8 ---p 001f9000 fe:01 4200691
/lib64/libcrypto.so.1.0.0
7efe67b8-7efe67b9b000 r--p 001f8000 fe:01 4200691
/lib64/libcrypto.so.1.0.0
7efe67b9b000-7efe67ba8000 rw-p 00213000 fe:01 4200691
/lib64/libcrypto.so.1.0.0
7efe67ba8000-7efe67bac000 rw-p  00:00 0
7efe67bac000-7efe67c0a000 r-xp  fe:01 4233868
/lib64/libssl.so.1.0.0
7efe67c0a000-7efe67e0a000 ---p 0005e000 fe:01 4233868
/lib64/libssl.so.1.0.0
7efe67e0a000-7efe67e0e000 r--p 0005e000 fe:01 4233868
/lib64/libssl.so.1.0.0
7efe67e0e000-7efe67e14000 rw-p 00062000 fe:01 4233868
/lib64/libssl.so.1.0.0
7efe67e14000-7efe67f14000 r-xp  fe:01 4199881
/lib64/libm-2.19.so
7efe67f14000-7efe68113000 ---p 0010 fe:01 4199881
/lib64/libm-2.19.so
7efe68113000-7efe68114000 r--p 000ff000 fe:01 4199881
/lib64/libm-2.19.so
7efe68114000-7efe68115000 rw-p 0010 fe:01 4199881
/lib64/libm-2.19.so
7efe68115000-7efe68135000 r-xp  fe:01 4217954
/lib64/ld-2.19.so
7efe68285000-7efe6828a000 rw-p  00:00 0
7efe68332000-7efe68335000 rw-p  00:00 0
7efe68335000-7efe68336000 r--p 0002 fe:01 4217954
/lib64/ld-2.19.so
7efe68336000-7efe68337000 rw-p 00021000 fe:01 4217954
/lib64/ld-2.19.so
7efe68337000-7efe68338000 rw-p  00:00 0
7fffd840f000-7fffd8436000 rw-p  00:00 0
[stack]
7fffd8446000-7fffd8448000 r-xp  00:00 0
[vdso]
7fffd8448000-7fffd844a000 r--p  00:00 0
[vvar]
ff60-ff601000 r-xp  00:00 0
[vsyscall]
ERROR: child killed: SIGABRT
test simplify-name-101.1 FAILED!
RESULT: child killed: SIGABRT
test json-cap-POSTenv-name FAILED (knownBug)!
ERROR: HTTP/1.0 200 OK

[fossil-users] Some VACUUM/REBUILD Related Feedback/Feature_Request

2017-10-17 Thread Martin Vahi

I'm not saying that it's necessarily a bug, but
it would be nice if, during the VACUUM phase of

fossil rebuild a_huge.fossilrepository --vacuum --compress --cluster

the Fossil web GUI said that the repository is
currently being rebuilt, instead of saying
citation--start-
SQLITE_BUSY: database is locked

SQLITE_MISUSE: API called with NULL prepared statement

SQLITE_MISUSE: misuse at line 77320 of [47cf83a068]

SQLITE_BUSY: database is locked

SQLITE_MISUSE: API called with NULL prepared statement

SQLITE_MISUSE: misuse at line 77320 of [47cf83a068]

SQLITE_BUSY: database is locked

SQLITE_MISUSE: API called with NULL prepared statement

SQLITE_MISUSE: misuse at line 77320 of [47cf83a068]

SQLITE_BUSY: database is locked

SQLITE_MISUSE: API called with NULL prepared statement

SQLITE_MISUSE: misuse at line 77320 of [47cf83a068]

SQLITE_BUSY: database is locked

SQLITE_MISUSE: API called with NULL prepared statement

SQLITE_MISUSE: misuse at line 77320 of [47cf83a068]

SQLITE_BUSY: database is locked

Database Error

database is locked: {REPLACE INTO config(name,value,mtime)
VALUES('hash-policy',2,now())}
citation--end---

I was logged in as administrator in the web GUI, and
the rebuild command was started from an SSH console.

The Fossil version is/was 2.3

uname -a
FreeBSD capella.elkdata.ee 10.3-RELEASE-p17 FreeBSD 10.3-RELEASE-p17 #0:
Fri Mar  3 18:23:48 EET 2017
r...@capella.elkdata.ee:/usr/obj/usr/src/sys/ELKDATA  amd64



[fossil-users] Bug Report: Square Brackets Illegal at the WYSIWYG

2017-10-08 Thread Martin Vahi


Fossil version: 2.2


Reproduction:


step_1)

Open the WYSIWYG Wiki Editor and write:

--citation--start---
range notation as known from math [1, ]
--citation--end-


step_2)

Push the "Apply These Changes" button.


step_3)

Observe how the  [1, ] has been converted to a link like

/wiki?name=1%2C+


Further Comments:

I failed to get any reliable results by
playing with the HTML entities for the square brackets.

I understand that square brackets have a
special meaning in the markup editor. Sometimes I have
edited those wikis by explicitly writing the
markup language, but the task of the WYSIWYG editor
is to translate properly between the WYSIWYG view
and whatever other format the wiki uses.
Therefore it's a bug, NOT a feature :->




[fossil-users] Bug Report Update 2: Fossil Server side Corrupts if Client Process is Killed

2017-07-11 Thread Martin Vahi

After doing

fossil rebuild

on the server side (the central repository side),
I get the following line on the client side:

Fossil internal error: infinite loop in DELTA table

citation--start
COMMENTS.txt
milestone_releases/currently_there_is_none.txt
project-name: 
repository:
/home/ts2/Projektid/progremise_infrastruktuur/andmevahetustarkvara/rakendusvõrgud/silktorrent/publitseerimishoidla/repository_storage.fossil
local-root:
/opt/2dot7TiB_k8vaketas/ts2/mittevarundatav/hiljem_siiski_varundatav/Projektid/progremise_infrastruktuur/andmevahetustarkvara/rakendusvõrgud/silktorrent/publitseerimishoidla/sandbox_of_the_Fossil_repository/
config-db:
/home/ts2/m_local/bin_p/fossil_scm_org/vd2017_04_28/.fossil
project-code: 26101fc480a34b3b993c8c83b7511840ab9d0c17
checkout: 336bc431b271e19ae40a2824a5523e6c1cba447a 2016-05-14
08:07:53 UTC
parent:   2a2e3765cab0b0074208f20f02f177025648d6df 2015-01-31
16:44:26 UTC
tags: trunk
comment:  folder layout (user: martin_vahi)
check-ins:2
Pull from
https://martin_v...@www.softf1.com/cgi-bin/tree1/technology/flaws/silktorrent.bash/home
Round-trips: 1   Artifacts sent: 0  received: 14
Fossil internal error: infinite loop in DELTA table
citation--end--

I changed the passwords in my online copy of the
server-side Fossil repository and placed the old
version, with the old passwords, at


https://temporary.softf1.com/2017/bugs/2017_07_11_server_side_Fossil_repository_of_the_bug_where_Ctrl_C_of_Fossil_client_currupts_SQLite_of_the_server.fossilrepository

It's about 5GiB, but maybe it helps with debugging.

Thank You.




[fossil-users] Bug Report Update 1: Failure to Recover when Previous Fossil Process got Killed

2017-07-11 Thread Martin Vahi

If I log into the web GUI of the remote Fossil repository
and use the

Admin -> Shunned -> Rebuild

then the web GUI shows me:

---citation--start

SQLITE_ERROR: no such table: ftsidx_segments

Database Error

no such table: ftsidx_segments:
{ DROP TABLE "ftsidx_segments"; DROP TABLE "ftsidx_segdir";
DROP TABLE "ftsidx_docsize"; DROP TABLE "ftsidx_stat"; }

---citation--end-


Cloning the remote repository does not
give me all of the files that the remote repository's
web GUI shows to be present. The remote repository URL is:


https://www.softf1.com/cgi-bin/tree1/technology/flaws/silktorrent.bash/home


Could someone please explain
how I can get cloning to work properly again?

Thank You.




[fossil-users] Bug Report: Failure to Recover when Previous Fossil Process got Killed

2017-07-11 Thread Martin Vahi

I do not know the exact operation that I killed with
the key combination Ctrl-C, but the operation was probably
one of the following:

fossil addremove
or
fossil commit
or
fossil pull
or
fossil push

The result is a "corrupted" repository file with a
"locked SQLite" database file.

I have not lost any data, because I can make a
clean clone of the remote/server repository and re-insert
the new files into the clone, but it would be nice if
Fossil were able to recover from that kind of situation
by itself, automatically. In my view, killing a program
at an arbitrary moment (in this case, killing the Fossil client) and then
restarting it should not disrupt the work flow of
the end user. The Fossil client should just continue from
where it left off, without requiring any manual intervention from
the end user.

Slightly edited excerpts of the console session:

$ fossil version
This is fossil version 2.2 [81d7d3f43e] 2017-04-11 20:54:55 UTC
$ uname -a
Linux linux-0fiz 3.16.7-53-desktop #1 SMP PREEMPT Fri Dec 2 13:19:28 UTC
2016 (7b4a1f9) x86_64 x86_64 x86_64 GNU/Linux



$ fossil ui ./repository_storage.fossil
Listening for HTTP requests on TCP port 8080
[14674:14674:0711/113948:ERROR:sandbox_linux.cc(343)]
InitializeSandbox() called with multiple threads in process gpu-process.
[14641:14747:0711/113949:ERROR:connection.cc(1892)] History sqlite error
5, errno 0: database is locked, sql: PRAGMA auto_vacuum
[14641:14747:0711/113949:ERROR:connection.cc(1892)] History sqlite error
5, errno 0: database is locked, sql: PRAGMA journal_mode = TRUNCATE
[14641:14660:0711/113949:ERROR:connection.cc(1892)] Web sqlite error 5,
errno 0: database is locked, sql: PRAGMA auto_vacuum
[14641:14660:0711/113949:ERROR:connection.cc(1892)] Web sqlite error 5,
errno 0: database is locked, sql: PRAGMA journal_mode = TRUNCATE
[14641:14747:0711/113950:ERROR:connection.cc(1892)] History sqlite error
5, errno 0: database is locked, sql: PRAGMA cache_size=1000
[14641:14747:0711/113950:ERROR:connection.cc(1892)] History sqlite error
5, errno 0: database is locked, sql: SELECT name FROM sqlite_master
WHERE type=? AND name=? COLLATE NOCASE
[14641:14660:0711/113950:ERROR:connection.cc(1892)] Web sqlite error 5,
errno 0: database is locked, sql: PRAGMA cache_size=32
[14641:14660:0711/113950:ERROR:connection.cc(1892)] Web sqlite error 5,
errno 0: database is locked, sql: SELECT name FROM sqlite_master WHERE
type=? AND name=? COLLATE NOCASE
[14641:14660:0711/113950:ERROR:connection.cc(1892)] Web sqlite error 5,
errno 0: database is locked, sql: SELECT name FROM sqlite_master WHERE
type=? AND name=? COLLATE NOCASE
[14641:14660:0711/113950:ERROR:connection.cc(1892)] Web sqlite error 5,
errno 0: database is locked, sql: SELECT COUNT(*) FROM sqlite_master
[14641:14660:0711/113950:ERROR:connection.cc(1892)] Web sqlite error 5,
errno 0: database is locked, sql: SELECT name FROM sqlite_master WHERE
type=? AND name=? COLLATE NOCASE
[14641:14660:0711/113950:ERROR:connection.cc(1892)] Web sqlite error 5,
errno 0: database is locked, sql: CREATE TABLE meta(key LONGVARCHAR NOT
NULL UNIQUE PRIMARY KEY, value LONGVARCHAR)
[14641:14660:0711/113950:ERROR:web_database_backend.cc(113)] Cannot
initialize the web database: 1
[14641:14660:0711/113950:ERROR:connection.cc(1892)] Passwords sqlite
error 5, errno 0: database is locked, sql: PRAGMA auto_vacuum
[14641:14660:0711/113950:ERROR:connection.cc(1892)] Passwords sqlite
error 5, errno 0: database is locked, sql: PRAGMA journal_mode = TRUNCATE
[14641:14659:0711/113950:ERROR:data_store_impl.cc(129)] Failed to open
Data Reduction Proxy DB: 3
[14641:14747:0711/113950:ERROR:connection.cc(1892)] History sqlite error
5, errno 0: database is locked, sql: SELECT name FROM sqlite_master
WHERE type=? AND name=? COLLATE NOCASE
[14641:14747:0711/113950:ERROR:connection.cc(1892)] History sqlite error
5, errno 0: database is locked, sql: CREATE TABLE meta(key LONGVARCHAR
NOT NULL UNIQUE PRIMARY KEY, value LONGVARCHAR)
[14641:14660:0711/113951:ERROR:connection.cc(1892)] Passwords sqlite
error 5, errno 0: database is locked, sql: PRAGMA cache_size=32
[14641:14660:0711/113951:ERROR:connection.cc(1892)] Passwords sqlite
error 5, errno 0: database is locked, sql: SELECT name FROM
sqlite_master WHERE type=? AND name=? COLLATE NOCASE
[14641:14660:0711/113951:ERROR:connection.cc(1892)] Passwords sqlite
error 5, errno 0: database is locked, sql: SELECT name FROM
sqlite_master WHERE type=? AND name=? COLLATE NOCASE
[14641:14660:0711/113951:ERROR:connection.cc(1892)] Passwords sqlite
error 5, errno 0: database is locked, sql: CREATE TABLE meta(key
LONGVARCHAR NOT NULL UNIQUE PRIMARY KEY, value LONGVARCHAR)
[14641:14660:0711/113951:ERROR:login_database.cc(542)] Unable to create
the meta table.
[14641:14660:0711/113951:ERROR:password_store_default.cc(45)] Could not
create/open login database.
[14641:14660:0711/113951:ERROR:connection.cc(1892)] Shortcuts sqlite
error 5, errno 0: database is locked, sql: PRAGMA 

[fossil-users] Semi-Bug: File Browser in the web GUI is Relatively slow

2017-05-26 Thread Martin Vahi

Reproduction:

1)
Open the link behind the "Home" menu option, the page at


https://www.softf1.com/cgi-bin/tree1/technology/flaws/mmmv_parasail_projects.bash/home

2)
Select the "Wiki" menu option and observe that
the site is "lightning fast": the new page
is displayed practically the moment the finger
comes back up from the mouse button. The same holds
when the web browser cache is emptied while staying
at the "Home" page and then moving to the "Wiki" page.

(In Google Chrome: Ctrl-Shift-Del)

3)
While at the "Home" or "Wiki" page,
select the "Files" menu option and observe
the multi-second delay. Navigating folders
in the web-based file browser is also
slightly sluggish.


Thank You for reading my letter.




Re: [fossil-users] Bug Report or a Feature Request: Cope with, Hosting Providers' Watchdogs

2017-05-21 Thread Martin Vahi

Thank You, everybody, for
the kind and thorough answers.

Date: Tue, 16 May 2017 18:15:56 -0400
From: Richard Hipp 
Subject: Re: [fossil-users] Bug Report or a Feature Request: Cope with
Hosting Providers' Watchdogs
>...
> What operation is it trying to do that takes more than 10 seconds?
> Usually Fossil runs for more like 10 milliseconds.
> ...
> Building big ZIP archives or tarballs takes long.  (FWIW, I have
> profiled those operations, and Fossil is spending most of its time
> inside of zlib, doing the requirement compression.)  Big "annotates"
> can also take some time if you have a long history.  Do you know what
> operations are timing out?
>...

I took the path of least resistance and switched from the
public URL of my repository to ssh-protocol-based access.
I was foolish not to think of that earlier.
My hosting provider

(They do not have any English pages, because they
 target only the local, Estonian, market.)
https://www.veebimajutus.ee/

confirmed that the maximum age limit of their
public web-query-servicing processes is 10 minutes, but
even without doing any measurements I suspect that I would have
needed more than that. In the case of my hosting provider,
processes started from the SSH console
do not have a time-to-live limit, so switching
from HTTPS to SSH solved my problem, except that I then
ran into a different difficulty while cloning a repository:

console--session--excerpt--start-
Round-trips: 448   Artifacts sent: 0  received: 251146
Clone done, sent: 141797  received: 5866110695  ip: 185.7.252.74
Rebuilding repository meta-data...
  100.0% complete...
Extra delta compression...
Vacuuming the database...
SQLITE_FULL: statement aborts at 9: [INSERT INTO vacuum_db.'blob'
SELECT*FROM"repository".'blob'] database or disk is full
SQLITE_FULL: statement aborts at 1: [VACUUM] database or disk is full
fossil: database or disk is full: {VACUUM}
console--session--excerpt--end-

I suspect that the failure mechanism here is that
the cloning somehow uses the

/tmp

directory at the client side, and if the HDD partition that
contains /tmp is "too full", the cloning fails. I found out from

fossil-2.2/src/sqlite3.c

line 35265 that there is an option to use the SQLITE_TMPDIR
environment variable. Setting it solved my problem, and
I eventually managed to clone my repository without trouble.

I suggest that the Fossil code might be updated
to include a test that checks the size of the repository
at the server side and then checks the free space on the
partition that contains the path named by SQLITE_TMPDIR,
or, if SQLITE_TMPDIR is not set, the free space on the
partition that contains /tmp.
If there is not enough free space, fossil should exit with an error.
The exit should happen before downloading
any artifacts, and stderr should include a message
hinting that the problem might be solved by
giving SQLITE_TMPDIR a value that refers to a folder
residing on a partition with at least
 MiB of free space.
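The proposed pre-flight check could be sketched as follows. This is a hypothetical illustration, not Fossil code; the 10% headroom margin and the repository size used in the example are assumptions:

```python
import os
import shutil

def tmp_space_ok(repo_size_bytes):
    """Pre-flight check: does the SQLite temp directory have room for a
    VACUUM of a repository of the given size? SQLite consults the
    SQLITE_TMPDIR environment variable before falling back to /tmp."""
    tmp_dir = os.environ.get("SQLITE_TMPDIR", "/tmp")
    free = shutil.disk_usage(tmp_dir).free
    # VACUUM rewrites the whole database, so demand one full extra copy
    # of the repository plus 10% headroom (the margin is a guess).
    return free >= int(repo_size_bytes * 1.1), tmp_dir, free

ok, tmp_dir, free = tmp_space_ok(5_866_110_695)  # bytes received in the clone above
if not ok:
    print(f"only {free} bytes free in {tmp_dir}; "
          "set SQLITE_TMPDIR to a folder on a roomier partition")
```

Run before any artifacts are downloaded, such a check would fail fast instead of aborting mid-VACUUM.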

Thank You, everybody, for Your answers and help and
thank You for reading my comment/letter.

Regards,
martin.v...@softf1.cm




[fossil-users] Bug Report or a Feature Request: Cope with Hosting Providers' Watchdogs

2017-05-16 Thread Martin Vahi


I am not totally sure what the issue in my case is, but
I suspect that the hosting provider
has some time-to-live limit for every operating
system process that is started for serving a request, and
if Fossil takes "too long" to process the request,
then it gets killed by the hosting provider's watchdog
before the Fossil process completes the commit operation.

Are there any "heart-beat" options available, where
a cron job might call something like

fossil --heartbeat --max-duration=10s

and during that "maximum duration" time period a small
chunk of the work gets done?

console--session--excerpt--start

ts2@linux-0fiz:~/Projektid/progremise_infrastruktuur/andmevahetustarkvara/rakendusvõrgud/silktorrent/publitseeri
mishoidla/sandbox_of_the_Fossil_repository$ fossil open
../repository_storage.fossil
project-name: 
repository:
/opt/2dot7TiB_k8vaketas/ts2/mittevarundatav/hiljem_siiski_varundatav/Projektid/progremise_infrastruktuur/andmevahetustarkvara/rakendusvõrgud/silktorrent/publitseerimishoidla/sandbox_of_the_Fossil_repository/../repository_storage.fossil
local-root:
/opt/2dot7TiB_k8vaketas/ts2/mittevarundatav/hiljem_siiski_varundatav/Projektid/progremise_infrastruktuur/andmevahetustarkvara/rakendusvõrgud/silktorrent/publitseerimishoidla/sandbox_of_the_Fossil_repository/
config-db:
/home/ts2/m_local/bin_p/fossil_scm_org/vd2017_04_28/.fossil
project-code: 26101fc480a34b3b993c8c83b7511840ab9d0c17
checkout: c260d3eb188e94d0e83a9807b5e7325f994956eb 2017-05-16
16:59:42 UTC
parent:   58fe99e749cc4e596ab81f7915b3b512a2d2ca17 2017-05-16
00:49:42 UTC
tags: trunk
comment:  wiki reference updates (user: martin_vahi)
check-ins:21

ts2@linux-0fiz:~/Projektid/progremise_infrastruktuur/andmevahetustarkvara/rakendusvõrgud/silktorrent/publitseeri
mishoidla/sandbox_of_the_Fossil_repository$ fossil push --private
Push to
https://martin_v...@www.softf1.com/cgi-bin/tree1/technology/flaws/silktorrent.bash/
Round-trips: 1   Artifacts sent: 0  received: 0
server did not reply
Push done, sent: 651648435  received: 12838  ip: 185.7.252.74

ts2@linux-0fiz:~/Projektid/progremise_infrastruktuur/andmevahetustarkvara/rakendusvõrgud/silktorrent/publitseeri
mishoidla/sandbox_of_the_Fossil_repository$ fossil version
This is fossil version 2.2 [81d7d3f43e] 2017-04-11 20:54:55 UTC

ts2@linux-0fiz:~/Projektid/progremise_infrastruktuur/andmevahetustarkvara/rakendusvõrgud/silktorrent/publitseeri
mishoidla/sandbox_of_the_Fossil_repository$ uname -a
Linux linux-0fiz 3.16.7-53-desktop #1 SMP PREEMPT Fri Dec 2 13:19:28
UTC 2016 (7b4a1f9) x86_64 x86_64 x86_64 GNU/Linux

ts2@linux-0fiz:~/Projektid/progremise_infrastruktuur/andmevahetustarkvara/rakendusvõrgud/silktorrent/publitseeri
mishoidla/sandbox_of_the_Fossil_repository$
console--session--excerpt--end--


Thank You for reading my comment and
thank You for the answers.




[fossil-users] Feature Request: Commit in Chunks to use less RAM

2017-05-16 Thread Martin Vahi


---console--session--excerpt--start
SQLITE_NOMEM: failed to allocate 651456745 bytes of memory
SQLITE_NOMEM: statement aborts at 22: [INSERT INTO
blob(rcvid,size,uuid,content)VALUES(6,662828201,'9b81ec309fc0c2f2278f386c8b1917359fe24bd8',:data)]

fossil: SQL error: out of memory
SQLITE_ERROR: statement aborts at 1: [ROLLBACK] cannot rollback - no
transaction is active
---console--session--excerpt--end--

The obvious line of thought might be that
I should just upgrade my hardware, but if there is
no fundamental reason why files cannot be
committed in chunks, then it would be nice if
Fossil simply looked at how much RAM
is available, used some formula to
allocate some percentage below 100% of the
free RAM, and then tried to do all of the work
within that budget. Maybe the default minimum
could be 5 MiB (five mebibytes), with the minimum
also available as an optional console parameter.
The idea is that some embedded systems have only
64 MiB of RAM, which is actually quite a lot,
given that in the '90s my desktop computer
had about 40 MiB. (I was born in 1981, so I was
a teenager in the '90s.) That 40 MiB allowed me
to play Wolfenstein and, if I remember correctly,
maybe even Doom 2.
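The RAM-bounded behaviour requested above is the standard streaming pattern: process the artifact through a fixed-size buffer so peak memory stays near the buffer size regardless of file size. A sketch, illustrative only and not Fossil's internals; the 5 MiB default mirrors the minimum suggested above:

```python
import hashlib
import io

def digest_in_chunks(stream, chunk_size=5 * 1024 * 1024):
    """Hash a stream through a fixed-size buffer, so peak memory stays
    near chunk_size no matter how large the input is."""
    h = hashlib.sha256()
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        h.update(chunk)
    return h.hexdigest()

# Chunked and whole-file processing agree on the result.
data = b"x" * (3 * 1024 * 1024)
assert digest_in_chunks(io.BytesIO(data), chunk_size=64 * 1024) == \
       hashlib.sha256(data).hexdigest()
```

The same buffering idea applies to inserting a blob into a database in pieces rather than as one allocation.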

The old Raspberry Pi 1, which has a single 700 MHz CPU core,
has about 400 MiB of RAM (512 MiB minus video memory).
The new Raspberry Pi 3, which has 4 CPU cores, has
about 900 MiB of RAM (1 GiB minus video memory).
Given the "Trusted Computing" trends of Intel and AMD CPUs,
a solution where a private cluster of Raspberry Pi like
computers is used instead of a single, "huge", PC
becomes more and more relevant. There are even academic
projects that try to construct such personal workstations:

https://suif.stanford.edu/collective/

The industry has been contemplating virtual-machine-based
separation for quite a while:

https://www.qubes-os.org/

although, in my view, the most elegant solution
for separating and then "re-joining" different applications
on a single machine is

https://genode.org/

Historically speaking, maybe one of the first, if not the first,
cluster-based personal workstation ideas was the
Plan 9 operating system, which seemed to
be more of a cluster-based single server with terminals
than a workstation, but the general idea seems to be there.

http://www.plan9.bell-labs.com/wiki/plan9/plan_9_wiki/

That is to say, a cluster of Raspberry Pi like computers
has a long history of trial and error, and even when the
cluster has a lot of RAM, that RAM is split into relatively
small chunks across the Raspberry Pi like computers.
The ability either to parallelize the task or to
get by with a small amount of RAM can be practical.


Thank You for reading my comment :-)




[fossil-users] Fossil Hosting by Generic PHP Hosting Services

2017-05-09 Thread Martin Vahi

Date: Sun, 23 Apr 2017 15:25:54 -0700 From: Richard Hipp
>  To: "Fossil SCM user's discussion"
>...
>  wrote:
>> Hey there,
>> I currently know of two Fossil hosting systems, Flint and Hydra.
> I'm going to take this in a different direction and suggest that the
> *best* Fossil hosting system is Linode (https://www.linode.com/).
>
> Linode is just a generic hosting provider.
>...

If I remember correctly, another option
that seems to work, at least for the wiki side
(I have not tested it with "huge" file transfers), is that
a generic hosting service provider serves a page
through a plain PHP program, with no special setup
or special options, just the "vanilla" service.
If the web server of the generic hosting provider
uses FastCGI for executing the PHP interpreter

https://en.wikipedia.org/wiki/FastCGI

(Interestingly, FastCGI's own site seems to be down.)
http://www.fastcgi.com/

then the CGI-related environment variables are
already all set for FastCGI, and any
console application that the PHP script executes can
read them. The PHP script
executes the Fossil binary as a plain console application,
directs the output of the console program
to a file, and serves the content of that file as the
binary-blob output of the PHP script. It just
so happens that instead of some JPEG or something similar,
the file happens to be the stream of bytes that the
fossil console application returned.

The fossil binary can be uploaded to the
generic hosting service as a plain file,
over any FTP access. The most difficult
part might be the cross-compilation of
Fossil, because the Fossil binary must
match the server's CPU. The fact that
the hosting provider might change the
CPU type reduces the reliability of that
solution, but the PHP script may run

uname -a

before executing the fossil console program and
verify that the "uname -a" output has not changed
since the fossil binary was uploaded
to the server. The PHP script might even choose
between different Fossil binaries.

I do not know whether that setup works with file transfers;
I have not tested it with files. I do know
that Fossil has some setting
that describes the preferred file chunk size, and
that PHP has some maximum file upload size limit.
Maybe, if the PHP limit is set higher than
Fossil's preferred file chunk size,
the file transfers also work. I have not tested it.
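The wrapper idea can be sketched in Python instead of PHP. The key point is that the front end already holds the CGI environment variables and only needs to relay them, plus the request body, to the fossil binary. The binary and repository paths here are hypothetical:

```python
import subprocess

# CGI variables a FastCGI front end would already have populated.
CGI_VARS = {"REQUEST_METHOD", "PATH_INFO", "SCRIPT_NAME", "QUERY_STRING",
            "CONTENT_TYPE", "CONTENT_LENGTH", "REMOTE_ADDR", "SERVER_NAME"}

def build_cgi_env(environ):
    """Pass only the CGI-related variables through to the child process."""
    env = {k: v for k, v in environ.items()
           if k in CGI_VARS or k.startswith("HTTP_")}
    env["GATEWAY_INTERFACE"] = "CGI/1.1"
    return env

def run_fossil_http(fossil_bin, repo, environ, request_body=b""):
    """Run the fossil binary in its HTTP mode ('fossil http REPO' reads a
    request from stdin and writes the raw response to stdout) and return
    the bytes the wrapper script would relay back to the client."""
    return subprocess.run([fossil_bin, "http", repo], input=request_body,
                          env=build_cgi_env(environ),
                          capture_output=True).stdout
```

Filtering the environment, rather than passing it wholesale, keeps the hosting account's other variables away from the child process.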


Thank You for reading my comment :-)

Regards,
martin.v...@softf1.com




[fossil-users] Newbie Fossil Hosting and Culture

2017-03-26 Thread Martin Vahi
>...
> Date: Sun, 26 Mar 2017 13:18:08 -0400
> From: Richard Hipp 
>...
> Subject: [fossil-users] GitLab v. Fossil. Was: Eric Raymond (a.k.a.
>   ESR) haspublished an SCM
> Message-ID:
>   

[fossil-users] Is Fossil Hash-collision proof?

2017-03-21 Thread Martin Vahi


I haven't encountered any collisions yet, but
I was wondering what would happen if 2 different
files that have the same size, the same timestamps,
different bitstreams, but the same hash (regardless of hash algorithm)
were to be committed simultaneously, in the same commit?
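To illustrate why the question matters, here is a toy content-addressed store, not Fossil's actual code, with a deliberately weak "hash" so the collision is trivial to trigger. With a real cryptographic hash, collisions are astronomically unlikely, but the failure mode is the same: only one of the two colliding blobs survives.

```python
class ContentStore:
    """Toy content-addressed blob store keyed by a pluggable hash."""
    def __init__(self, hash_fn):
        self.hash_fn = hash_fn
        self.blobs = {}

    def add(self, data):
        key = self.hash_fn(data)
        # A blob whose key already exists is treated as a duplicate and
        # silently skipped, so a colliding file is never stored.
        self.blobs.setdefault(key, data)
        return key

store = ContentStore(hash_fn=len)  # pathological stand-in: 'hash' is the length
k1 = store.add(b"aaaa")
k2 = store.add(b"bbbb")            # same size, different bits: a 'collision'
assert k1 == k2
assert store.blobs[k2] == b"aaaa"  # the second file silently vanished
```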

After all, it is the "improbable" corner cases that
accumulate and trash even those projects that are
not slammed together by sloppy-or-just-lacks-the-self-esteem-to-care
type of developers. Everything works fine for a
long, long, long period of time, until one day, in the midst
of being overwhelmed with some other task that
uses software X as one of its main tools...

Thank You.




[fossil-users] Fossil_v_2_0 Bug: Abandoning commit due to long lines in Foo

2017-03-21 Thread Martin Vahi

I unpacked the .stblob that I tried to commit
in the case described in my previous bug report,
and I tried to commit the batch of small files
individually. No luck:

---citation--start

./wiki_references/2017/software/MaidSafe_net/doc/2017_03_21_wget_copy_of_blog_maidsafe_net/bonnet/blog.maidsafe.net/2013/07/25/open-source-more-freedom/index.html?share=facebook.html
contains long lines. Use --no-warnings or the "binary-glob" setting to
disable this warning.
Commit anyhow (a=all/y/N)?
Abandoning commit due to long lines in
./wiki_references/2017/software/MaidSafe_net/doc/2017_03_21_wget_copy_of_blog_maidsafe_net/bonnet/blog.maidsafe.net/2013/07/25/open-source-more-freedom/index.html?share=facebook.html
Push to
http://martin_v...@www.softf1.com/cgi-bin/tree1/technology/flaws/silktorrent.bash/
Round-trips: 1   Artifacts sent: 0  received: 0
Push done, sent: 1237  received: 334  ip: 185.7.252.74
---citation--end---


P.S. Right after sending my previous bug report
to this list, literally within 10 seconds,
I received a spam email that used my own subject
line with a "Re:" prefix as its email subject.
I suspect that either spam bots have been signing
up to this mailing list, or someone who receives the
mailing list's posts individually instead of as
a digest is using some crappy, bot-infested email
service that scrapes email addresses from
all incoming email. I know that it is not me, because
this has not happened to me with other mailing lists,
or at least I have not noticed it.



[fossil-users] Fossil_v_2_0 Bug: SQLITE_TOOBIG when Committing 1.6GiB size file

2017-03-21 Thread Martin Vahi

--citation--start-

./wiki_references/2017/software/2017_03_21_maidsafe_net_tar_gz/f59c44f4cfae80319685fc8abdbc374039c22b421141951ai_278e5c783e9d982543415b569b6095bde7f1e409077208c4bb6bc48ee3fefe3fh_0481949961s_1000v.stblob
contains binary data. Use --no-warnings or the "binary-glob" setting to
disable this warning.
Commit anyhow (a=all/y/N)? all
SQLITE_TOOBIG: statement aborts at 8: [INSERT INTO
blob(rcvid,size,uuid,content)VALUES(53,1699491840,'7514aa336e0c015e4363859df996ee30ee81538c',:data)]
string or blob too big
fossil: SQL error: string or blob too big
Push to
http://martin_v...@www.softf1.com/cgi-bin/tree1/technology/flaws/silktorrent.bash/
Round-trips: 1   Artifacts sent: 0  received: 0
Push done, sent: 1234  received: 334  ip: 185.7.252.74
--citation--end---

I tried to commit

(About 1.6GiB)

http://temporary.softf1.com/2017/bugs/f59c44f4cfae80319685fc8abdbc374039c22b421141951ai_278e5c783e9d982543415b569b6095bde7f1e409077208c4bb6bc48ee3fefe3fh_0481949961s_1000v.stblob


to a clone of the

http://www.softf1.com/cgi-bin/tree1/technology/flaws/silktorrent.bash/

If You download .stblob, then
its integrity can be checked by


wget
http://www.softf1.com/cgi-bin/tree1/technology/flaws/silktorrent.bash/raw/milestone_releases/silktorrent_packager_t1_2017_03_14.bash?name=433efaa35a360ddcdfba723979c682975286cd83
  --output-document=`pwd`/silktorrent_packager_t1.bash

chmod 0755 ./silktorrent_packager_t1.bash
silktorrent_packager_t1.bash verify `pwd`/.stblob

The .stblob files are just plain tar files that have
multiple hashes (Tiger, SHA256) and their file size as part of
their file name. For example, the start of the .stblob file name
can be generated by

cp .stblob ./x.stblob # can take time for the 1.6GiB
ruby -e "puts(ARGV[0].to_s.match(/^[abcdef\d]+/)[0].reverse)"
`tigerdeep ./x.stblob`
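The naming scheme can be illustrated with a Python sketch. The field layout and suffix here are hypothetical stand-ins; the real scheme described above also embeds a Tiger hash, which the standard library does not provide:

```python
import hashlib

def stblob_name(data, suffix="v.stblob"):
    """Compose a file name embedding the SHA-256 digest and the byte
    size of the content (hypothetical field layout)."""
    return f"{hashlib.sha256(data).hexdigest()}h_{len(data):010d}s_{suffix}"

def verify_stblob_name(name, data, suffix="v.stblob"):
    """Re-derive the name from the bytes; any tampering with the
    content makes the embedded digest (or size) stop matching."""
    return name == stblob_name(data, suffix)
```

Because the name is derived from the bytes, a host can serve the file but cannot alter it without the mismatch being detectable by anyone.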


My environment:

--citation--start-
ts2@linux-0fiz:~/tmp$ uname -a
Linux linux-0fiz 3.16.7-53-desktop #1 SMP PREEMPT Fri Dec 2 13:19:28
UTC 2016 (7b4a1f9) x86_64 x86_64 x86_64 GNU/Linux
ts2@linux-0fiz:~/tmp$ fossil version
This is fossil version 2.0 [1d407cff32] 2017-03-03 12:00:30 UTC
ts2@linux-0fiz:~/tmp$ date
Tue Mar 21 09:34:59 EET 2017
ts2@linux-0fiz:~/tmp$
--citation--end---


P.S. A somewhat off-topic comment, but the idea
behind the tar files that have their sizes and hashes
as part of their file names is that this way even the
No_Such_Agency/KGB/FSB/STASI/KAPO/SUPO/SÄPO could offer
a drop-box service over Tor to anybody without having
the ability to modify any of the files without getting
caught. If the guys at the various intelligence agencies
were smarter than they currently are, then they would
understand that in the future wars will be fought with robots,
which need targets, which need target detection, which
takes a lot of computing power if 3D models of
battlefields are calculated, hospital-MRI-style, from
radar images, and drones that have to stay up long and be
light cannot carry the energy-hungry computers that are stacked
in piles in data centers and special data-center warships;
meaning: the https://en.wikipedia.org/wiki/Utah_Data_Center
would be one of the first targets for China/Russia/whomever.
But if the Utah data center offered an economically relevant
drop-box service, or something similar to the
https://www.clockss.org/
to everybody, including Chinese/Russian/European/whoeverElse
businesses and governments, then sending a rocket to the
Utah data center would hurt the adversary's own businesses.
After all, the most important service of the nasty
European_Union/Brussels bureaucracy is to keep Europe at peace
by using economic interdependencies. But, of course, hoping
that a huge giga-government bureaucracy, like Washington,
would think that strategically, especially given how the last
presidential elections in the U.S. were won, is a really tall order :-/

Anyways, thank You for reading. That was just the background story
that answers the question of how something as primitive as
plain tar files with hashes in their names can be of any huge,
strategic, importance :-)





[fossil-users] Feature Request: Include file Names of wiki Page Attachments to the wiki Search Results

2017-03-08 Thread Martin Vahi

A test case is that I want attachments from


http://www.softf1.com/cgi-bin/tree1/technology/flaws/silktorrent.bash/wiki?name=Attic+001+for+Holding+Various+Files

to appear in my Fossil wiki search results. I am not
asking for the content of the PDF files to be analyzed, only
their file names.

As a side note, I mention that while trying to find out
whether that feature has already been included in Fossil v_2_0,
I found the following, obsolete, page on the official Fossil site:

   https://www.fossil-scm.org/xfer/wiki?name=Fossil+2.0

Thank You.




[fossil-users] Bug Candidate: wall Clock as a show Stopper at Cloning

2017-01-24 Thread Martin Vahi


I believe that this is a flaw/bug, because
the clocks of different computers can obviously
differ for various reasons. The value of a clock
should not be a show-stopper when cloning a repository.
If the difference between the clocks of 2 computers
is known, then one time is even convertible
to the other, and vice versa.

-citation---start---
*** time skew *** server is slow by 2.0 minutes
Clone done, sent: 1262  received: 1121  ip: 195.250.189.35
server returned an error - clone aborted
-citation---end---


This is fossil version 1.35 [3aa86af6aa]

Thank You :-)



[fossil-users] Repo Checksum Speedup Idea: flaw in my comment

2016-12-05 Thread Martin Vahi
As it turns out, I already made a mistake
in the tree-based algorithm.

The old, proposed, flawed version:
> ...
> array_of_nodes_with_wrong_x_node_hash=unique_by_node_ID(
>   clone(
>   ob_AVL_tree.array_of_nodes_that_had_changes_on_path_2_root
>   ).concat(clone(
>   ob_AVL_tree.array_of_nodes_that_have_changed_children
>   // change within children means that any
>   // of the children changed during the insertion
>   // or removal of nodes to/from the ob_AVL_tree
>   // after the AVL-tree got (automatically) rebalanced.
>   // A change between null and an existing child is
>   // also considered a change.
>   ))
>   ).sort_by_path_length_from_node_to_root_node
> 
> ...

A newer version looks only at the changes within
children:

array_of_nodes_with_wrong_x_node_hash=unique_by_node_ID(
  clone(
  ob_AVL_tree.array_of_nodes_that_have_changed_children
  )).sort_by_path_length_from_node_to_root_node
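The intent of the pseudocode can be demonstrated with a runnable sketch. The `H`, `Node`, and tree-building helpers below are my own stand-ins, not the proposed AVL-tree code; the point is only that after one file changes, the nodes needing a new `x_node_hash` are exactly those on the path from the changed node up to the root.

```python
import hashlib

def H(*parts):
    """Hash the parts with a separator, as in the pseudocode's concat()."""
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
        h.update(b"_separator_")
    return h.digest()

class Node:
    """One tree node carrying a per-file hash and an x_node_hash."""
    def __init__(self, file_hash):
        self.file_hash = file_hash
        self.left = self.right = self.parent = None
        self.node_hash = b""

    def recompute(self):
        # x_node_hash depends on both children's node hashes plus this
        # node's own file hash, mirroring function_instance_1 above.
        self.node_hash = H(self.left.node_hash if self.left else b"",
                           self.right.node_hash if self.right else b"",
                           self.file_hash)

def rehash_path_to_root(node):
    """After one file changes, recompute x_node_hash only along the
    path to the root; all other node hashes are reused as-is."""
    steps = 0
    while node is not None:
        node.recompute()
        steps += 1
        node = node.parent
    return steps

# Build a complete 7-node binary tree over dummy per-file hashes.
nodes = [Node(bytes([i])) for i in range(7)]
for i in range(7):
    for child in (2 * i + 1, 2 * i + 2):
        if child < 7:
            nodes[child].parent = nodes[i]
            if child == 2 * i + 1:
                nodes[i].left = nodes[child]
            else:
                nodes[i].right = nodes[child]
for n in reversed(nodes):          # children before parents: bottom-up
    n.recompute()

nodes[6].file_hash = b"changed"    # one file's content changes
assert rehash_path_to_root(nodes[6]) == 3  # only 3 of 7 node hashes redone
```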

Thank You.



[fossil-users] Repo Checksum Speedup Idea

2016-12-05 Thread Martin Vahi
>   De : Joerg Sonnenberger 
>  À : fossil-users@lists.fossil-scm.org
>  Envoyé le : Dimanche 4 décembre 2016 20h55
>  Objet : Re: [fossil-users] Bug report: Terrible Performance, when
Checking in LLVM Source
> ...
> No. What repo checksum does is compute a separate checksum over the
> concatenation of all files.
>
> Joerg
> ...

Thank You all for the answers to my previous
bug report letters. I have not made up my mind
how to proceed with the large-repo case, but
2 things I know for certain:

certain_thing_1)
Hash algorithms will evolve, and whatever
Fossil uses in 2016_12 for the checksum
will become obsolete.

certain_thing_2)
Regardless of what the hash algorithm is,
there exists at least one solution that
allows calculating the checksum of a
concatenation of a large set of files
without re-calculating the hashes of the
files that did not change.


The naive and slow version:

array_of_relative_file_paths=[ file_1, file_2, ..., file_N ]
blob_1=null
i_len=array_of_relative_file_paths.length
for i in 0..(i_len-1) {
blob_1=concat(blob_1,read_file(array_of_relative_file_paths[i]))
} // for
x_checksum=hash(blob_1)
// Even if the blob_1 is not allocated like in the above
// pseudocode but is instead fed to the hash(...)
// in some stream fashion, the hash(...) still has to
// re-process the file_1, file_2, ...
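The naive scheme, made concrete as a small Python sketch (the hash choice is arbitrary; feeding the hash in stream fashion avoids building blob_1 in memory, but every file is still re-read on every call, even when only one has changed):

```python
import hashlib

def naive_repo_checksum(file_contents):
    """Checksum over the concatenation of all files."""
    h = hashlib.sha256()
    for data in file_contents:   # file_1, file_2, ..., file_N
        h.update(data)           # streamed: no big blob_1 is allocated
    return h.hexdigest()

assert naive_repo_checksum([b"alpha", b"beta"]) == \
       hashlib.sha256(b"alphabeta").hexdigest()
```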


The gist of the proposed idea is to place
various hashes to a tree and then hash both, files
and hashes of the files, giving a probabilistic opportunity
to avoid running the hash function on all of the files
after the collection of files has changed:

ob_AVL_tree=AVL_tree_class.new

function_instance_1=(f_arg_ob_node_reference,f_arg_x_file_hash){
x_hash_left=hash(null)
x_hash_right=hash(null)

ob_child_left=f_arg_ob_node_reference.get_child_left()
ob_child_right=f_arg_ob_node_reference.get_child_right()
if ob_child_left != null {
x_hash_left=ob_child_left.record.x_node_hash
} // if
if ob_child_right != null {
x_hash_right=ob_child_right.record.x_node_hash
} // if

x_bytestream=concat(x_hash_left,
"_separator_",
x_hash_right,
"_separator_",
f_arg_x_file_hash)
return hash(x_bytestream)
}

array_of_relative_file_paths=[ file_1, file_2, ..., file_N ]
i_len=array_of_relative_file_paths.length
if 0

[fossil-users] Bug report addon: maybe it's the push command

2016-12-03 Thread Martin Vahi
I forgot to mention that I also push the committed
files to a remote repository.
The uplink is ~1 MiB/s (~10 Mbps), and the ping
from my local machine to the remote machine is about 20 ms.

So, even if the files were uploaded one by one,

~100k files * 20 ms = ~100,000 * 0.02 s = 2000 s < 3600 s = 1 h

So far it has taken over 6 hours.
The 4.4 GiB of data should be uploaded in about 2 h
(1 MiB/s, 3600 MiB/h).
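The back-of-envelope numbers above can be checked mechanically; this is pure arithmetic on the figures quoted in the letter, not a measurement:

```python
# Per-file round-trip overhead: ~100k files at ~20 ms each.
files = 100_000
rtt_s = 0.020
per_file_cost_s = files * rtt_s
assert abs(per_file_cost_s - 2000) < 1e-6   # ~2000 s, well under an hour

# Raw transfer time: 4.4 GiB at ~1 MiB/s.
size_mib = 4.4 * 1024
rate_mib_per_s = 1.0
transfer_h = size_mib / rate_mib_per_s / 3600
assert 1.2 < transfer_h < 1.3   # ~1.25 h, inside the letter's ~2 h estimate
```

Either way, both bounds together are far below the 6+ hours observed, which supports the suspicion that something other than the network is the bottleneck.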

The fossil process also takes up 100% of one of
the 3 GHz CPU cores, and has done so for more than 6 h in a row.
Fossil version: custom-compiled 1.35
Compile flags included -mtune=native -ftree-vectorize
CPU is 64-bit AMD, ~3 GHz

Remote repository URL (so that You can check the server response):
http://www.softf1.com/cgi-bin/tree1/technology/flaws/mmmv_parasail_projects.bash/



[fossil-users] Bug report: Terrible Performance, when Checking in LLVM Source

2016-12-03 Thread Martin Vahi
Reproduction:

1) Download LLVM source.
   It might be done by executing the bash script from

http://www.softf1.com/cgi-bin/tree1/technology/flaws/mmmv_parasail_projects.bash/artifact/3a28f1fb67a5d860

2) Commit the source.

It is about 4.4GiB, over 100k files, over 6k folders,
but it should not be that bad. After all, that's what
many projects look like in 2016.


Regards,
martin.v...@softf1.com



[fossil-users] Announcement of a Fossil Usability Enhancement Wrapper Script

2016-02-29 Thread Martin Vahi

I suggest that You do not try it for about a week, because
I have not been able to test it extensively yet. Neither
is it optimized in any way, nor will it ever be; as a
matter of fact, it is very inefficient, but it does
have a practical feature set and might help other people
in addition to me. It is Linux specific.

https://github.com/martinvahi/mmmv_notes/blob/master/mmmv_notes/mmmv_utilities/src/various_Bash_scripts/mmmv_Fossil_operator_t1.bash

Basically, it is very verbose and is intended to help people
use Fossil without having to learn the various nuances that Fossil
has. Inefficient, but robust and simple; something that can be
shipped to clients without requiring them to start digging
into the Fossil documentation.

The verbose version

./mmmv_Fossil_operator_t1.bash overwrite_remote_with_local

has a not-loudly-advertised short version

./mmmv_Fossil_operator_t1.bash up

and  the short version for

 ./mmmv_Fossil_operator_t1.bash overwrite_local_with_remote

is

 ./mmmv_Fossil_operator_t1.bash down

and it JUST WORKS. There is no need to think about what the word
"private" means in Fossil vocabulary. The simple version

./mmmv_Fossil_operator_t1.bash clone_all

just does it, and the only alternative is

 ./mmmv_Fossil_operator_t1.bash clone_public

which is maybe not the smartest wording, but maybe it will change
at some point.

To make things simple, there's even

 ./mmmv_Fossil_operator_t1.bash \
 overwrite_remote_with_local use_autogenerated_commit_message

and the

 ./mmmv_Fossil_operator_t1.bash shred_local_copy

deletes the fossil file and the sandbox folder and
the automatically created archives folder, which
contains timestamped folders holding the sandbox
content from the moment right before the sandbox
content is overwritten by the aforementioned

 ./mmmv_Fossil_operator_t1.bash overwrite_local_with_remote


The sole purpose of the script is to make Fossil
more user-friendly, easily usable by clients of
freelancers like me. :-)

As I said, I only just uploaded it to GitHub (about 15 min
before writing this letter), so it will probably
have some flaws; I just do not know them yet,
despite the fact that I test things before publishing.

Thank You for reading this letter. :-)





[fossil-users] Nothing Serious, just what I've been, very Slowly, Working on

2016-02-04 Thread Martin Vahi


> Date: Wed, 3 Feb 2016 12:20:42 +0100
> From: Jan Danielsson <jan.m.daniels...@gmail.com>
> To: fossil-users@lists.fossil-scm.org
> Message-ID: <56b1e28a.6080...@gmail.com>
> ...
> ...typically
> developers want things which work and will continue to do so for 20+
> years; and archives are hugely important.  Random Shiny Web2.0-company
> Based Communication Platform may or may not exist in two years.  Mailing
> lists do not depend on a particular company existing or not.
> ...
>Not that I don't get that there are good things about these new
> platforms, but I don't see what they offer which outweighs what I lose.
>
>If you want developers to move away from mailing lists, invent
> something which doesn't have all the drawbacks of other technologies,
> but improves on the things which are important to us.
> ...

Date: Wed, 3 Feb 2016 08:27:04 -0700
From: Warren Young <w...@etr-usa.com>
To: Fossil SCM user's discussion <fossil-users@lists.fossil-scm.org>
Message-ID: <848f7448-5131-4606-ae05-503320a66...@etr-usa.com>
> ...
>On Feb 2, 2016, at 3:19 PM, Martin Vahi <martin.v...@softf1.com> wrote:
>>
>> Whether the Telegram sticks, I think that
>> probably NOT, because it is not totally anonymous,
>> not even totally private, depends only on one vendor
>> and that vendor applies some censorship just to
>> avoid being closed down by different governments,
>
>Every other one of the items on that list I posted
>has at least one of these weaknesses.
>The single biggest one is being proprietary.
>>...
>I'll be the first to agree that Internet email is
>far from perfect, but it remains the Internet's only
>fully-decentralized non-proprietary federated
>communications system.
>>...
>
>Anything wanting to replace email will have to
>cover all of those bases and then offer a significant
>feature increase before it can be expected to
>sweep aside 40 years of incumbency.
>
>More likely, email's replacement will replace it
>from within, in the same way and with
>approximately the same speed as IPv6 is replacing IPv4.
>>...


Thank You for Your answers and comments.

No promises of any kind by me here now, but what
I gather from the cited statements is that
I have been on a very right track with my
years-in-extremely-slow-and-tedious development of

http://www.silktorrent.ch
(redirects to my softf1.com)

especially one of its sub-specifications, which
covers addressing:

http://longterm.softf1.com/specifications/lightmsgp/v2/

The reason why I am sending the current letter is that
I hope You will point out some obvious and
highly visible flaws that I have missed but that You
might notice just by throwing a quick glance at
the specification.

I do not have anything proper for encrypting
big files yet, but for e-mail text I have
developed

https://github.com/martinvahi/mmmv_devel_tools/tree/master/src/mmmv_devel_tools/mmmv_crypt_t1

The mmmv_crypt_t1 is not final, and what I have found
out is that the encryption algorithm should not be
part of a specification, which means that a temporary
hack might be to use GNU Privacy Guard public-key
encryption key pairs as a shared-secret "symmetric"
key. The reason why I do not trust any public-key
encryption algorithm is described at

http://bitrary.softf1.com/index.php?title=Software_Development_:_Security_:_Cryptography#Why_Public_key_Cryptography_is_Fundamentally_Flawed


Thank You. :-)







Re: [fossil-users] fossil-users Digest, Vol 97, Issue 2

2016-02-02 Thread Martin Vahi
> Date: Mon, 1 Feb 2016 20:30:21 -0700
> From: Warren Young <w...@etr-usa.com>
> ...
> On Jan 31, 2016, at 5:17 PM, Martin Vahi <martin.v...@softf1.com> wrote:
>> one of the particularities of the WebM format is
>> that huge videos can be served from ordinary
>> HTML-web server and scrolled without downloading
>> the whole file.
> 
> That doesn't happen for free.
>  
> I happen to have a web app that serves content via  
> where the src attribute simply points to the
> video file on the server, and I assure you, the
> whole file does get transferred on the initial click,
> at least with the Chrome + Apache 2.2 pair.
> (I could test other pairs, but I can't be bothered right now.)
> In my testing, the file starts playing before 
> the last byte arrives, but
> that's just an optimization.
>
> The rest of the file is still streaming
> in continuously, even with a file nearly a gigabyte in size.
> ...

Thank You for that note. I looked at it with the
browser's built-in developer tools (the Network
graph) and indeed, You are right, but I think
that as long as the machine has enough RAM,
the downloaded data size does not matter.
Often, as can be seen from

http://archive.softf1.com/2014/ted_com/

whole WebM files are about 100MiB, sometimes
even just 30MiB in size, and having the opportunity
to reference a 20min video from the Fossil Wiki
offers a lot. All I need is that video files
are treated like PDF files and that the HTML video
tag is allowed in the Fossil Wiki.
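Warren's observation and the seeking behaviour can both be true at once: seeking in an HTML video over a plain HTTP server relies on byte-range requests (RFC 7233), which servers like Apache answer for static files, while a browser is still free to keep one open-ended GET streaming in the background. The following is a minimal sketch, in Python, of the range arithmetic a static file server performs; the function names are illustrative, not taken from any particular server.

```python
# Sketch of the HTTP byte-range arithmetic (RFC 7233) that lets a video
# player jump to the middle of a large WebM file without fetching it all.

def parse_range(range_header: str, file_size: int):
    """Parse a 'bytes=start-end' Range header into a concrete byte span."""
    unit, _, spec = range_header.partition("=")
    if unit != "bytes":
        raise ValueError("only byte ranges are supported")
    start_s, _, end_s = spec.partition("-")
    if start_s:                      # "bytes=500-999" or "bytes=500-"
        start = int(start_s)
        end = int(end_s) if end_s else file_size - 1
    else:                            # "bytes=-500" means the last 500 bytes
        start = file_size - int(end_s)
        end = file_size - 1
    if start < 0 or end >= file_size or start > end:
        raise ValueError("range not satisfiable")  # would yield HTTP 416
    return start, end

def content_range(start: int, end: int, file_size: int) -> str:
    """Build the Content-Range header for a 206 Partial Content reply."""
    return f"bytes {start}-{end}/{file_size}"

# A player seeking to the middle of a ~280MiB file might ask for:
start, end = parse_range("bytes=146800640-", 293601280)
print(content_range(start, end, 293601280))
# -> bytes 146800640-293601279/293601280
```

Whether the browser actually issues such partial requests instead of a single open-ended GET depends on the browser/server pair, which matches the Chrome + Apache behaviour described above.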

To put it directly: yes, data transfers have to be
studied carefully, but for some reason people find it
acceptable to program desktop computers far more
wastefully than they program microcontrollers, and
modern "fast-and-lean" C code is bloatware compared
to very old-school assembly. Even the requirement that
a computer has a colour monitor can be hard to meet
sometimes, but that is still not a reason to throw
essential media features out of the window.
Video is what colour images were in the past, and as with
everything, end users are required to have some smarts
to use their tools wisely, id est place JPEGs instead of
BMPs into PDF files, distribute podcasts in OGG or the like
instead of distributing them in WAV, etc. The same
with videos: video files have a SIZE and the end users
had better know it. The way I understand it, the target
audience of Fossil is people with a deep technical
background, not novice office workers. (Seasoned
professional secretaries actually do know about image
file formats and the like.)

> ...
>> e-mail is fundamentally outdated technology
>> and mailing lists can just move over to the
>>
>> https://www.telegram.org/
> 
> Um, yeah.
> 
> 1994: Commercial Internet begins; everyone gets an email address
> 1997: "Email is outdated, everyone just uses AIM/ICQ/Messenger!"
> 1999: "Email is outdated, everyone just uses web forums!"
> ...
> 2013: "Email is outdated, everyone just uses Vine!"
> 2015: "Email is outdated, everyone just uses Telegram!"
> 
> ...
> 
> Bottom line, email refuses to go away.  Deal with it.
> ...

Thank You for the comparison. Regarding Your
statement that "e-mail refuses to go away",
my answer is that the majority of the population of
planet Earth does not consist of software developers
or old-school communications technicians/electronics
engineers, and therefore uses the tools that people
in technical professions have to offer. If
AIM/ICQ/Vine/Google+/Buzz... were a temporary
phenomenon and people still revert back to
the old, unencrypted, un-anonymized e-mail, then
the only ones to blame are the software developers,
not the dentists, neurosurgeons, biologists,
mathematicians, accountants, cooks, or airline service providers.

Secondly, some change CAN BE expected. Nobody but
myself would read this letter if it were written
on clay tablets or paper. If people can be expected
to create an account on Facebook or some forum,
then they are certainly willing to try out
other alternatives, including Telegram.
Will Telegram stick? I think probably NOT,
because it is not totally anonymous,
not even totally private, depends on only one vendor,
and that vendor applies some censorship just to
avoid being shut down by various governments.
Still, it differs from other temporary options by
having a nice Bot API, and for public, censored
conversations it will probably last as long as
GitHub lasts. GitHub will probably not last
forever and probably neither will Telegram,
but for some reason people do not use CVS
that much any more, despite its former popularity,
and none of the previous communications systems
seem to have such a fine Bot API combined with
a relatively sane public policy and a relatively
reliable technical implementation.

Nonetheless I find the issue of "e-mail not going away"
very 

[fossil-users] Where is Fossil bugtrack?

2016-01-31 Thread Martin Vahi

At first glance it sounds like a dumb
question, but I did not find a way to
add my feature request to

https://www.fossil-scm.org/index.html/rptview?rn=1

I logged in as anonymous, but I still could not
find a way to add my feature request.

The feature request: Fossil v1.34
does not support the HTML5 video tag,

http://www.w3schools.com/html/html5_video.asp

but I would like to add WebM

http://www.webmproject.org/

videos to my Fossil repositories. A WebM
video player and codecs are built into both
Google Chrome and Mozilla Firefox, and one
of the particularities of the WebM format is
that huge videos can be served from an ordinary
HTML web server and scrolled through without
downloading the whole file. The

http://archive.softf1.com/1968/2001_A_Space_Odyssey.webm

can be used as a demonstration of that WebM feature.
The file itself is about 280MiB. Another nice
"feature" of WebM is that Google pays
for the lawyers that defend the format
against patent trolls.
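For illustration, the markup the feature request asks the wiki to allow is the standard HTML5 video element with a WebM source. The sketch below is a hypothetical Python helper that builds such an element; Fossil provides no such function, and the file name is only an example.

```python
# Hypothetical sketch: build the HTML5 <video> markup that the feature
# request would like the Fossil Wiki to accept for WebM files.

def video_embed(src: str, width: int = 640) -> str:
    """Return an HTML5 <video> element pointing at a WebM file."""
    return (f'<video controls width="{width}">'
            f'<source src="{src}" type="video/webm">'
            f'Your browser does not support the video tag.'
            f'</video>')

print(video_embed("2001_A_Space_Odyssey.webm"))
```

The `type="video/webm"` attribute lets the browser skip the download entirely when it lacks a WebM codec, falling back to the text inside the element.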

Regarding the
"writing a good e-mail server" topic,

https://youtu.be/jPSOxVjK8A0?t=51m56s

I propose that it be skipped, because
e-mail is fundamentally outdated technology
and mailing lists can just move over to

https://www.telegram.org/

which has the following history:

http://mashable.com/2015/05/18/russias-mark-zuckerberg-pavel-durov/

Thank You for reading this letter. :-)


Thankfully,
martin.v...@softf1.com




