From: David Lang da...@lang.hm

Well, as others noted, the problem is actually caused by doing the diffs, and
that is a very common thing to do with source code.
To some degree, my attitude comes from When I Was A Boy, when you got
16K for both your bytecode and your data, ...
On Tue, May 27, 2014 at 11:47 PM, Dale R. Worley wor...@alum.mit.edu wrote:
I've discovered a problem using Git. It's not clear to me what the
correct behavior should be, but it seems to me that Git is failing
in an undesirable way.
The problem arises when trying to handle a very large file.
On 27.05.2014 18:47, Dale R. Worley wrote:
Even doing a 'git reset' does not put the repository in a state where
'git fsck' will complete:

You have to remove the offending commit from the reflog as well.
The following snippet creates an offending commit; big_file is 2GB, which
is too large for ...
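A minimal sketch of that reproduce-and-clean-up sequence (sizes are scaled down from the 2GB file in the report; the repo name and file size are illustrative, while the `git reflog expire` / `git gc` flags are standard git):

```shell
# Reproduce the report in miniature: commit a large blob, then try
# to make the repository forget it completely.
git init big-repo && cd big-repo
git config user.name test && git config user.email test@example.com
git commit --allow-empty -m "initial"
truncate -s 10M big_file         # sparse stand-in for the 2GB file
git add big_file
git commit -m "offending commit"

# 'git reset' alone is not enough -- the reflog still references the
# offending commit, so fsck/gc will keep visiting the big blob:
git reset --hard HEAD~1
git reflog expire --expire=now --all
git gc --prune=now               # now the big blob is actually deleted
git fsck --full                  # completes cleanly
```

The key step is the `git reflog expire`; without it, `HEAD@{1}` keeps the offending commit reachable and `git gc` refuses to prune it.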
Duy Nguyen pclo...@gmail.com writes:

$ git fsck --full --strict
notice: HEAD points to an unborn branch (master)
Checking object directories: 100% (256/256), done.
fatal: Out of memory, malloc failed (tried to allocate 21474836481 bytes)

Back trace for this one:
...
From: Duy Nguyen pclo...@gmail.com

I don't know how many commands are hit by this. If you have time and
gdb, please put a break point in the die_builtin() function and send
backtraces for those that fail. You could speed up the process by
creating a smaller file and setting the environment variable ...
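Duy's suggestion can be run non-interactively; a sketch, assuming the current directory is a repository that already reproduces the failure (die_builtin() is git's central exit path, so the breakpoint fires just before the "malloc failed" message is printed):

```shell
# Break where git is about to die and dump the call chain.
gdb -batch \
    -ex 'break die_builtin' \
    -ex 'run' \
    -ex 'backtrace' \
    --args git fsck --full --strict
```

The resulting backtrace shows which code path attempted the oversized allocation.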
From: Junio C Hamano gits...@pobox.com
You need to have enough memory (virtual is fine if you have enough
time) to do fsck. Some part of index-pack could be refactored into
a common helper function that could be called from fsck, but I think
it would be a lot of work.
How much memory is ...
From: David Lang da...@lang.hm
Git was designed to track source code; there are warts that show up
in the implementation when you use individual files >4GB.
I'd expect that if you want to deal with files over 100K, you should
assume that they don't all fit in memory.
Dale
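For what it's worth, git does have one knob in this area: `core.bigFileThreshold` (a real config setting; the `1m` value below is purely illustrative, the default is much larger). Blobs above the threshold are stored without delta compression and are streamed where the code paths support it, which sidesteps some, though not all, of the whole-file-in-memory behavior discussed above:

```shell
git init demo && cd demo
# Blobs larger than this are added without delta compression and
# streamed where supported, instead of being diffed/deltified in RAM.
git config core.bigFileThreshold 1m
git config --get core.bigFileThreshold   # -> 1m
```

This avoids the diff/delta cost David describes, but commands like fsck that unpack whole objects can still hit the allocation limit.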
David Lang da...@lang.hm writes:
On Wed, 28 May 2014, Dale R. Worley wrote:
It seems that much of Git was coded under the assumption that any file
could always be held entirely in RAM. Who made that mistake? Are
people so out of touch with reality?
Git was designed to track source code, ...