> My guess is that you are probably overflowing a 32-bit integer
> someplace. If so, I fear that this will not be something easily
> fixed.
>
Looks like that's the case in at least one place:
there's an int64 => unsigned int overflow at the call to blob_resize()
(blob.c:865).
On Aug 6, 2018, at 11:30 AM, Philip Bennefall wrote:
>
> a second commit right afterwards caused a segfault again
In advance of drh getting time to work on this, maybe you could try two
debugging steps on your end:
1. If you’re doing this on a platform that will run Valgrind, try running a
On 8/6/18, Philip Bennefall wrote:
> But I wanted to report our experience in case it is of use to the
> developers, and in case doing so could give us some assistance with the
> immediate issue.
Thank you for the report. This is definitely something that should be
fixed. But I have a large
The following command solved the issue at least for the moment:
fossil rebuild --vacuum --analyze --compress
I'm not yet sure which of the options made the difference, but I wanted
to report back nevertheless.
Kind regards,
Philip Bennefall
On 8/6/2018 6:57 PM, Philip Bennefall wrote:
OK.
On 8/6/18, Richard Hipp wrote:
> On 8/6/18, Philip Bennefall wrote:
>> Do you have any recommendations for something we could try in order to
>> get more information, or would you suggest that we switch to another
>> DVCS if we need to store files of these sizes?
>
> Regardless of the problem, I
OK. I'll investigate. Thanks for the quick response.
Kind regards,
Philip Bennefall
On 8/6/2018 6:50 PM, Richard Hipp wrote:
> On 8/6/18, Philip Bennefall wrote:
>> Do you have any recommendations for something we could try in order to
>> get more information, or would you suggest that we switch
On 8/6/18, Philip Bennefall wrote:
> Do you have any recommendations for something we could try in order to
> get more information, or would you suggest that we switch to another
> DVCS if we need to store files of these sizes?
Regardless of the problem, I don't think *any* DVCS is appropriate
Do you have any recommendations for something we could try in order to
get more information, or would you suggest that we switch to another
DVCS if we need to store files of these sizes?
Kind regards,
Philip Bennefall
On 8/6/2018 6:33 PM, Richard Hipp wrote:
On 8/6/18, Philip Bennefall
On 8/6/18, Philip Bennefall wrote:
> We have a repository in our organization which stores a number of rather
> large binary files. When attempting to commit, we sometimes get
> something like this:
>
>
> ERROR: [largefile.bin] is 999378424 bytes on disk but 210746789 in the
> repository
My guess
Additional information: The repo checksum is disabled locally. When
enabling it, we get:
Segmentation fault: 11
Kind regards,
Philip Bennefall
On 8/6/2018 6:11 PM, Philip Bennefall wrote:
We have a repository in our organization which stores a number of
rather large binary files. When