On Sun, Jul 26, 2015 at 11:12 PM, Damien Sykes-Pendleton <
[email protected]> wrote:

>   As it happens, I have a few questions on my mind at the moment, that
> I’m hoping somebody can help me with.
>

Good morning and welcome aboard!


> 1. I understand that Fossil repositories can more or less have thousands
> upon thousands of commits, which could make this question sound daft, but I
> will ask anyway since my project isn’t yet this big. Is there any way to
> dump all, or at least several commits onto disk at once, whether through
> updates, checkouts, zip archives etc?
>

Not if I understand your question properly. You can dump out individual
commits via the zip and tar commands, and you can export the whole database
to git using one of the git export commands, but you cannot export anything
in between - not 2 or 3 or 4 commits into a single bundle. If you want to
create a zip file for every single version, that can be done with a small
bit of scripting which loops over the timeline data.
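
For example, something along these lines (a minimal sketch in Python, though
any scripting language would do; the timeline flags and the bracketed-hash
parsing are from memory, so double-check them against your fossil version
before relying on it):

# Rough sketch: write one zip archive per checkin by looping over the
# timeline. Assumes the fossil binary is in the PATH and that this is run
# from inside an open checkout. The "-n 0" (no limit) and "-t ci"
# (checkins only) options are assumptions - verify with "fossil help timeline".
import re
import subprocess

# Grab the full checkin timeline as plain text.
timeline = subprocess.check_output(
    ["fossil", "timeline", "-n", "0", "-t", "ci"]
).decode("utf-8")

# Timeline entries show each checkin's (abbreviated) hash in square brackets.
for checkin in re.findall(r"\[([0-9a-f]+)\]", timeline):
    # "fossil zip VERSION OUTPUTFILE" writes one archive for that checkin.
    subprocess.check_call(["fossil", "zip", checkin, checkin + ".zip"])

Each pass through the loop asks fossil to build the zip for one checkin, so
expect it to be slow (and disk-hungry) on a repo with thousands of commits.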


> 2. Exactly how much memory is needed to generate a zip file of
> approximately 750MB? When I’m trying to zip
>

There is no direct answer to that: it is not a 1-to-1 mapping, but the cost
grows roughly linearly with the size of the repo and the size of the largest
single file. The zip file itself is created in memory and its size is
directly related to the compressed size of each file stored in it. However,
getting the uncompressed content out of the repository before it is placed
in the zip file is relatively expensive - _very roughly_ double the size of
the file in most cases (but this can vary widely). So the total concurrent
RAM cost is _approximately_ 2x the largest uncompressed file plus the
compressed size of all files combined, plus other overhead which is more
difficult to quantify.
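
To put some purely illustrative numbers on that (nothing measured, just the
estimate above applied to your case): if the largest single file in that
checkin is, say, 200MB uncompressed and the compressed content of all files
adds up to your 750MB, the estimate works out to roughly (2 x 200MB) + 750MB
= ~1.15GB of concurrent RAM, before counting the harder-to-quantify overhead.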


> any commit of or over that size I am getting a “Fossil internal error out
> of memory” message, despite that at the time I had 2.6GB free, and over a
> terabyte of free disk space. Fossil itself was reportedly only using
> approximately 350MB before it came up with the message.
>

If you have very large files, they may be problematic in this regard. For
example, a 250MB file might need 500MB-600MB or more of concurrent memory to
get extracted from the repository (because deltas normally have to be applied
to create it). Specifically, it is generally more expensive to extract older
versions than newer ones, as fossil tries to keep the latest copy of each
file in its normal form and store historical versions as deltas (the opposite
of many SCMs, which store the newer versions as deltas). So the further back
in history you go, the more deltas fossil may have to apply, which is not
cheap in terms of RAM.

-- 
----- stephan beal
http://wanderinghorse.net/home/stephan/
http://gplus.to/sgbeal
"Freedom is sloppy. But since tyranny's the only guaranteed byproduct of
those who insist on a perfect world, freedom will have to do." -- Bigby Wolf