On Fri, 10 Jan 2020, Joseph Myers wrote:
> And we should also mention configuring your email address for git, if you
> haven't used git on that system before or the default address you've
> configured for git isn't the one you want for GCC commits.
I've now applied this patch to document that.
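As a sketch of the setup described above (the directory, name, and address here are placeholders, not gcc.gnu.org policy), a per-clone identity can be configured like this:

```shell
# Demonstrated in a throwaway repository; name and e-mail are placeholders.
mkdir -p /tmp/gcc-id-demo && cd /tmp/gcc-id-demo
git init -q
# Omitting --global scopes the identity to this clone only, so a default
# address configured elsewhere on the system is left untouched.
git config user.name "Jane Hacker"
git config user.email "jhacker@example.com"
# Verify what will appear on new commits in this clone:
git config user.email
```

Running `git config user.email` with no value prints the address that will be recorded on commits, which is a quick way to check the setup before pushing.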
On Fri, 10 Jan 2020, Joseph Myers wrote:
> In gitwrite.html: the various custom setup of checkouts for which scripts
> have been posted, including usage of branches not fetched by default; an
> actual example commit session should be included like the one in
> svnwrite.html; more examples
been some discussion of removing these generated files
+from GCC's Git source tree (there is no discussion of removing them
+from the released source tarballs). If that happens then
+building GCC from the Git source tree would require installing
+the above mentioned build tools. Installin
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82102
--- Comment #7 from Richard Biener ---
Author: rguenth
Date: Fri Dec 15 08:19:15 2017
New Revision: 255678
URL: https://gcc.gnu.org/viewcvs?rev=255678&root=gcc&view=rev
Log:
2017-12-15 Richard Biener
Backport from mainline
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82102
Richard Biener changed:
What|Removed |Added
CC||andrey.y.guskov at intel dot com
---
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82102
--- Comment #5 from Richard Biener ---
Author: rguenth
Date: Tue Sep 5 08:15:21 2017
New Revision: 251692
URL: https://gcc.gnu.org/viewcvs?rev=251692&root=gcc&view=rev
Log:
2017-09-05 Richard Biener
PR
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82102
Richard Biener changed:
What|Removed |Added
Status|ASSIGNED|RESOLVED
Resolution|---
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82102
Richard Biener changed:
What|Removed |Added
Status|NEW |ASSIGNED
Blocks|
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82102
Markus Trippelsdorf changed:
What|Removed |Added
Priority|P3 |P1
Status|UNCONFIRMED
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82102
Andreas Schwab changed:
What|Removed |Added
Keywords||build
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82102
Bug ID: 82102
Summary: [8 Regression] ICE: Segmentation fault in
/home/arnd/git/gcc/gcc/tree-ssa-pre.c:4863
Product: gcc
Version: 8.0
Status: UNCONFIRMED
Miles Bader mi...@gnu.org writes:
[...]
Do you use the http: or git: protocol for cloning? The official gcc git
repo only supports the old git http: protocol, which is almost useless
on slow/high-latency networks...
gcc.gnu.org does talk git:// too. By new http, you are probably
referring
On Sun, Feb 27, 2011 at 11:19 PM, Frank Ch. Eigler f...@redhat.com wrote:
Miles Bader mi...@gnu.org writes:
Do you use the http: or git: protocol for cloning? The official gcc git
repo only supports the old git http: protocol, which is almost useless
on slow/high-latency networks
Basile Starynkevitch bas...@starynkevitch.net writes:
PS. By the way, git clone-ing the GCC git repository takes a lot of time
from Europe. Perhaps having a daily tar ball of the result of that command
available by ftp would be very nice
Do you use the http: or git: protocol for cloning
Hello All,
Even with the help of very nice people and of the gcc@ list, I am unable to
use git for GCC MELT with ease. I tried this entire week without success
My only issue is merging the trunk into GCC MELT but since this is something
I am doing several times a week, it makes me
On 8 Dec 2007, Johannes Schindelin said:
Hi,
On Sat, 8 Dec 2007, J.C. Pizarro wrote:
On 2007/12/07, Linus Torvalds [EMAIL PROTECTED] wrote:
SHA1 is almost totally insignificant on x86. It hardly shows up. But
we have a good optimized version there.
If SHA1 is slow then why don't he
coming
up with a recipe to set up your own git-svn mirror. Suggestions on the
following.
// Create directory and initialize git
mkdir gcc
cd gcc
git init
// add the remote site that currently mirrors gcc
// I have chosen the name gcc.gnu.org *1* as my local name to refer to
// this choose something
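A filled-out version of that recipe might look as follows; the mirror URL is the infradead.org one mentioned elsewhere in the thread, and the remote name is an arbitrary local label:

```shell
# Create directory and initialize git
mkdir -p /tmp/gcc-mirror && cd /tmp/gcc-mirror
git init -q
# Add the remote site that currently mirrors gcc. "gcc.gnu.org" is only
# the local name chosen for it -- pick whatever you like.
git remote add gcc.gnu.org git://git.infradead.org/gcc.git
# Inspect what was registered (no network access needed for this step):
git remote -v
```

An actual `git fetch gcc.gnu.org` would then pull the mirrored history, which for gcc is a large download.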
On Fri, Dec 07, 2007 at 04:47:19PM -0800, Harvey Harrison wrote:
Some interesting stats from the highly packed gcc repo. The long chain
lengths very quickly tail off. Over 60% of the objects have a chain
length of 20 or less. If anyone wants the full list let me know. I
also have included
From: David Miller [EMAIL PROTECTED]
Date: Fri, 07 Dec 2007 04:53:29 -0800 (PST)
I should run oprofile...
While doing the initial object counting, most of the time is spent in
lookup_object(), memcmp() (via hashcmp()), and inflate(). I tried to
see if I could do some tricks on sparc with the
On Mon, 10 Dec 2007, Gabriel Paubert wrote:
On Fri, Dec 07, 2007 at 04:47:19PM -0800, Harvey Harrison wrote:
Some interesting stats from the highly packed gcc repo. The long chain
lengths very quickly tail off. Over 60% of the objects have a chain
length of 20 or less. If anyone wants
Hi,
On Sat, 8 Dec 2007, J.C. Pizarro wrote:
On 2007/12/07, Linus Torvalds [EMAIL PROTECTED] wrote:
SHA1 is almost totally insignificant on x86. It hardly shows up. But
we have a good optimized version there.
If SHA1 is slow then why don't he contribute adding Haval160 (3 rounds)
that
On Sat, 8 Dec 2007, J.C. Pizarro wrote:
1. Don't compress this repo but compact this uncompressed repo
using minimal spanning forest and deltas
2. After, compress this whole repo with LZMA (e.g. 48MiB) from 7zip
before
burning it to DVD for backup reasons or before
On Dec 8, 2007 8:53 PM, Joe Buck [EMAIL PROTECTED] wrote:
Mr. Pizarro has endless ideas, and he'll give you some new ones every day.
That's true.
He thinks that no one else knows any computer science, and he will attempt
to teach you what he knows,
It's not the only one ;-) is in good and
Where did you read this? I missed that part.
When you object
that he's wasting your time, he'll start talking about freedom of speech.
Actually he never spoke like that (probably I missed that part too).
Read gcc mailing list archives, if you have a lot of time on your hands.
need to build 64-bit
binaries.
[EMAIL PROTECTED]:~/src/GCC/git/test$ time git repack -a -d -f --window=250
--depth=250
Counting objects: 1190671, done.
fatal: Out of memory? mmap failed: Cannot allocate memory
real    58m36.447s
user    289m8.270s
sys     4m40.680s
[EMAIL PROTECTED]:~/src/GCC
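When repack runs out of address space as above, one hedge (values illustrative, not tuned for gcc) is to cap the memory the delta search may use via git's pack.* settings:

```shell
# Shown in a scratch repository; apply the same config in the real clone.
mkdir -p /tmp/gcc-pack-demo && cd /tmp/gcc-pack-demo
git init -q
# Bound per-thread delta window memory and the delta cache so a
# --window=250 style repack cannot grow without limit.
git config pack.windowMemory 256m
git config pack.deltaCacheSize 256m
git config --get pack.windowMemory
```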
of history. Ie just do a
time git blame -C gcc/regclass.c > /dev/null
and see if the deeper delta chains are very expensive.
(Yeah, the above is pretty much designed to be the worst possible case for
this kind of aggressive history packing, but I don't know if that choice
of file to try
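The same measurement can be tried on any clone; here it is in a tiny scratch repository just to show the command shape (on the real gcc tree, gcc/regclass.c makes blame walk a very long history):

```shell
mkdir -p /tmp/gcc-blame-demo && cd /tmp/gcc-blame-demo
git init -q
printf 'line1\n' > regclass.c
git add regclass.c
git -c user.name=demo -c user.email=demo@example.com commit -qm one
printf 'line2\n' >> regclass.c
git -c user.name=demo -c user.email=demo@example.com commit -qam two
# -C asks blame to also detect lines copied from other files, which is
# what makes the gcc case expensive.
time git blame -C regclass.c > /dev/null
```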
On 2007/12/07, Linus Torvalds [EMAIL PROTECTED] wrote:
On Fri, 7 Dec 2007, David Miller wrote:
Also I could end up being performance limited by SHA, it's not very
well tuned on Sparc. It's been on my TODO list to code up the crypto
unit support for Niagara-2 in the kernel, then work with
Some interesting stats from the highly packed gcc repo. The long chain
lengths very quickly tail off. Over 60% of the objects have a chain
length of 20 or less. If anyone wants the full list let me know. I
also have included a few other interesting points, the git default
depth of 50, my
On Fri, 2007-12-07 at 14:14 -0800, Jakub Narebski wrote:
Is SHA a significant portion of the compute during these repacks?
I should run oprofile...
SHA1 is almost totally insignificant on x86. It hardly shows up. But
we have a good optimized version there.
zlib tends to be a lot
On 12/7/2007 6:23 PM, Linus Torvalds wrote:
Is SHA a significant portion of the compute during these repacks?
I should run oprofile...
SHA1 is almost totally insignificant on x86. It hardly shows up. But we
have a good optimized version there.
zlib tends to be a lot more noticeable
On Fri, 7 Dec 2007, David Miller wrote:
Also I could end up being performance limited by SHA, it's not very
well tuned on Sparc. It's been on my TODO list to code up the crypto
unit support for Niagara-2 in the kernel, then work with Herbert Xu on
the userland interfaces to take advantage
On Fri, 7 Dec 2007, Jon Smirl wrote:
On 12/7/07, Linus Torvalds [EMAIL PROTECTED] wrote:
On Thu, 6 Dec 2007, Jon Smirl wrote:
time git blame -C gcc/regclass.c > /dev/null
[EMAIL PROTECTED]:/video/gcc$ time git blame -C gcc/regclass.c > /dev/null
real    1m21.967s
Giovanni Bajo [EMAIL PROTECTED] writes:
On 12/7/2007 6:23 PM, Linus Torvalds wrote:
Is SHA a significant portion of the compute during these repacks?
I should run oprofile...
SHA1 is almost totally insignificant on x86. It hardly shows up. But
we have a good optimized version there.
On Dec 7, 2007, at 2:14 PM, Jakub Narebski wrote:
Giovanni Bajo [EMAIL PROTECTED] writes:
On 12/7/2007 6:23 PM, Linus Torvalds wrote:
Is SHA a significant portion of the compute during these repacks?
I should run oprofile...
SHA1 is almost totally insignificant on x86. It hardly shows up. But
On 12/7/07, Giovanni Bajo [EMAIL PROTECTED] wrote:
On Fri, 2007-12-07 at 14:14 -0800, Jakub Narebski wrote:
Is SHA a significant portion of the compute during these repacks?
I should run oprofile...
SHA1 is almost totally insignificant on x86. It hardly shows up. But
we have a
From: Linus Torvalds [EMAIL PROTECTED]
Date: Fri, 7 Dec 2007 09:23:47 -0800 (PST)
On Fri, 7 Dec 2007, David Miller wrote:
Also I could end up being performance limited by SHA, it's not very
well tuned on Sparc. It's been on my TODO list to code up the crypto
unit support for
On Wed, Dec 05, 2007 at 11:49:21PM -0800, Harvey Harrison wrote:
git repack -a -d --depth=250 --window=250
Since I have the whole gcc repo locally I'll give this a shot overnight
just to see what can be done at the extreme end of things.
When I tried this on a very large repo, at
Harvey Harrison [EMAIL PROTECTED] writes:
git svn does accept a mailmap at import time with the same format as the
cvs importer I think. But for someone that just wants a repo to check
out this was easiest. I'd be willing to spend the time to do a nicer
job if there was any interest from
Hi,
On Wed, 5 Dec 2007, David Miller wrote:
From: Daniel Berlin [EMAIL PROTECTED]
Date: Wed, 5 Dec 2007 21:41:19 -0500
It is true I gave up quickly, but this is mainly because i don't like
to fight with my tools.
I am quite fine with a distributed workflow, I now use 8 or so gcc
Thursday 06 December 2007 13:57:06, Johannes Schindelin wrote:
[...]
So I fully expect an issue like Daniel's to be resolved in a matter of
minutes on the git list, if the OP gives us a chance. If we are not even
Cc'ed, you are completely right, she or he probably does not want the
issue to
On Wed, 5 Dec 2007, Harvey Harrison wrote:
git repack -a -d --depth=250 --window=250
Since I have the whole gcc repo locally I'll give this a shot overnight
just to see what can be done at the extreme end of things.
Don't forget to add -f as well.
Nicolas
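Putting Nicolas's reminder together with the earlier command, the full aggressive repack is the line below, demonstrated on a scratch repository (on the gcc history this ran for hours and needed gigabytes of RAM):

```shell
mkdir -p /tmp/gcc-repack-demo && cd /tmp/gcc-repack-demo
git init -q
echo a > a.txt && git add a.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm init
# -f recomputes all deltas from scratch instead of reusing existing ones.
git repack -a -d -f -q --window=250 --depth=250
ls .git/objects/pack/
```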
On Thu, 6 Dec 2007, Jeff King wrote:
On Thu, Dec 06, 2007 at 01:47:54AM -0500, Jon Smirl wrote:
The key to converting repositories of this size is RAM. 4GB minimum,
more would be better. git-repack is not multi-threaded. There were a
few attempts at making it multi-threaded but none were
On Thu, 6 Dec 2007, Jeff King wrote:
On Thu, Dec 06, 2007 at 09:18:39AM -0500, Nicolas Pitre wrote:
The downside is that the threading partitions the object space, so the
resulting size is not necessarily as small (but I don't know that
anybody has done testing on large repos to find
On 12/6/07, Linus Torvalds [EMAIL PROTECTED] wrote:
On Thu, 6 Dec 2007, Daniel Berlin wrote:
Actually, it turns out that git-gc --aggressive does this dumb thing
to pack files sometimes regardless of whether you converted from an
SVN repo or not.
Absolutely. git --aggressive is mostly
NightStrike [EMAIL PROTECTED] writes:
On 12/5/07, Daniel Berlin [EMAIL PROTECTED] wrote:
As I said, maybe i'll look at git in another year or so.
But i'm certainly going to ignore all the git is so great, we should
move gcc to it people until it works better, while i am much more
On Thu, 6 Dec 2007, Jeff King wrote:
What is really disappointing is that we saved only about 20% of the
time. I didn't sit around watching the stages, but my guess is that we
spent a long time in the single threaded writing objects stage with a
thrashing delta cache.
I don't think you
On Thu, 6 Dec 2007, Daniel Berlin wrote:
I worked on Monotone and other systems that use object stores. for a
little while :) In particular, I believe GIT's original object store was
based on Monotone, IIRC.
Yes and no.
Monotone does what git does for the blobs. But there is a big
On 12/6/07, Linus Torvalds [EMAIL PROTECTED] wrote:
On Thu, 6 Dec 2007, Jeff King wrote:
What is really disappointing is that we saved only about 20% of the
time. I didn't sit around watching the stages, but my guess is that we
spent a long time in the single threaded writing objects
On Thu, Dec 06, 2007 at 09:18:39AM -0500, Nicolas Pitre wrote:
The downside is that the threading partitions the object space, so the
resulting size is not necessarily as small (but I don't know that
anybody has done testing on large repos to find out how large the
difference is).
On Thu, 6 Dec 2007, Jon Smirl wrote:
On 12/6/07, Linus Torvalds [EMAIL PROTECTED] wrote:
On Thu, 6 Dec 2007, Jeff King wrote:
What is really disappointing is that we saved only about 20% of the
time. I didn't sit around watching the stages, but my guess is that we
spent a long
On Thu, 6 Dec 2007, NightStrike wrote:
No disrespect is meant by this reply. I am just curious (and I am
probably misunderstanding something).. Why remove all of the
documentation entirely? Wouldn't it be better to just document it
more thoroughly?
Well, part of it is that I don't
On Thu, 2007-12-06 at 00:09, Linus Torvalds wrote:
Git also does delta-chains, but it does them a lot more loosely. There
is no fixed entity. Delta's are generated against any random other version
that git deems to be a good delta candidate (with various fairly
successful heuristics), and
On 12/6/07, Linus Torvalds [EMAIL PROTECTED] wrote:
On Thu, 6 Dec 2007, Daniel Berlin wrote:
Actually, it turns out that git-gc --aggressive does this dumb thing
to pack files sometimes regardless of whether you converted from an
SVN repo or not.
I'll send a patch to Junio to just
On 2007/12/06, Jon Smirl [EMAIL PROTECTED] wrote:
On 12/6/07, Linus Torvalds [EMAIL PROTECTED] wrote:
On Thu, 6 Dec 2007, Jeff King wrote:
What is really disappointing is that we saved only about 20% of the
time. I didn't sit around watching the stages, but my guess is that we
spent
On 2007-12-06 10:15:17 -0800, Ian Lance Taylor wrote:
Distributed version systems like git or Mercurial have some advantages
over Subversion.
It's surprising that you don't mention svk, which is based on top
of Subversion[*]. Has anyone tried? Is there any problem with it?
[*] You have
Thursday 06 December 2007 21:28:59, Vincent Lefevre wrote:
On 2007-12-06 10:15:17 -0800, Ian Lance Taylor wrote:
Distributed version systems like git or Mercurial have some advantages
over Subversion.
It's surprising that you don't mention svk, which is based on top
of Subversion[*]. Has
On Thu, 6 Dec 2007, Jon Loeliger wrote:
On Thu, 2007-12-06 at 00:09, Linus Torvalds wrote:
Git also does delta-chains, but it does them a lot more loosely. There
is no fixed entity. Delta's are generated against any random other version
that git deems to be a good delta candidate (with
Jon Loeliger [EMAIL PROTECTED] writes:
On Thu, 2007-12-06 at 00:09, Linus Torvalds wrote:
Git also does delta-chains, but it does them a lot more loosely. There
is no fixed entity. Delta's are generated against any random other version
that git deems to be a good delta candidate (with
Vincent Lefevre wrote:
It's surprising that you don't mention svk, which is based on top
of Subversion[*]. Has anyone tried? Is there any problem with it?
I must agree with Ismail's reply here. We have used svk for our
internal development for about two years, for the reason of easy
mirroring
On 2007/12/6, J.C. Pizarro [EMAIL PROTECTED], i wrote:
For multicore CPUs, don't divide the work into threads.
Divide the work into processes!
Tips, tricks and hacks: to use fork, exec, pipes and another IPC mechanisms
like
mutexes, shared memory's IPC, file locks, pipes, semaphores, RPCs,
On 12/6/07, Andrey Belevantsev [EMAIL PROTECTED] wrote:
Vincent Lefevre wrote:
It's surprising that you don't mention svk, which is based on top
of Subversion[*]. Has anyone tried? Is there any problem with it?
I must agree with Ismail's reply here. We have used svk for our
internal
Junio C Hamano [EMAIL PROTECTED] writes:
Jon Loeliger [EMAIL PROTECTED] writes:
I'd like to learn more about that. Can someone point me to
either more documentation on it? In the absence of that,
perhaps a pointer to the source code that implements it?
See
On 12/6/07, Nicolas Pitre [EMAIL PROTECTED] wrote:
When I lasted looked at the code, the problem was in evenly dividing
the work. I was using a four core machine and most of the time one
core would end up with 3-5x the work of the lightest loaded core.
Setting pack.threads up to 20 fixed
On Thu, 6 Dec 2007, Jon Smirl wrote:
On 12/6/07, Nicolas Pitre [EMAIL PROTECTED] wrote:
When I lasted looked at the code, the problem was in evenly dividing
the work. I was using a four core machine and most of the time one
core would end up with 3-5x the work of the lightest loaded
On 12/6/07, Nicolas Pitre [EMAIL PROTECTED] wrote:
On Thu, 6 Dec 2007, Jon Smirl wrote:
On 12/6/07, Nicolas Pitre [EMAIL PROTECTED] wrote:
When I lasted looked at the code, the problem was in evenly dividing
the work. I was using a four core machine and most of the time one
core
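The over-partitioning workaround described above is a one-line config; sketched here on a scratch repository (20 matches the value quoted in the thread):

```shell
mkdir -p /tmp/gcc-threads-demo && cd /tmp/gcc-threads-demo
git init -q
echo hello > f.txt && git add f.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm demo
# More threads than cores, so no single thread is left holding a huge
# slice of the delta window while the others sit idle.
git config pack.threads 20
git repack -a -d -q
```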
On Thu, 6 Dec 2007, Jon Smirl wrote:
On 12/6/07, Nicolas Pitre [EMAIL PROTECTED] wrote:
On Thu, 6 Dec 2007, Jon Smirl wrote:
On 12/6/07, Nicolas Pitre [EMAIL PROTECTED] wrote:
When I lasted looked at the code, the problem was in evenly dividing
the work. I was using a four core
Junio C Hamano [EMAIL PROTECTED] writes:
Junio C Hamano [EMAIL PROTECTED] writes:
Jon Loeliger [EMAIL PROTECTED] writes:
I'd like to learn more about that. Can someone point me to
either more documentation on it? In the absence of that,
perhaps a pointer to the source code that
On 12/6/07, Nicolas Pitre [EMAIL PROTECTED] wrote:
On Thu, 6 Dec 2007, Jon Smirl wrote:
On 12/6/07, Nicolas Pitre [EMAIL PROTECTED] wrote:
When I lasted looked at the code, the problem was in evenly dividing
the work. I was using a four core machine and most of the time one
core
On Thu, 06 Dec 2007 23:26:07 +0100 David Kastrup wrote:
Junio C Hamano [EMAIL PROTECTED] writes:
Junio C Hamano [EMAIL PROTECTED] writes:
Jon Loeliger [EMAIL PROTECTED] writes:
I'd like to learn more about that. Can someone point me to
either more documentation on it? In the
On 12/6/07, Nicolas Pitre [EMAIL PROTECTED] wrote:
Well, that's possible with a window 25 times larger than the default.
Why did it never use more than three cores?
You have 648366 objects total, and only 647457 of them are subject to
delta compression.
With a window size of 250 and a
Linus Torvalds [EMAIL PROTECTED] writes:
On Thu, 6 Dec 2007, Jon Loeliger wrote:
I guess one question I posit is, would it be more accurate
to think of this as a delta net in a weighted graph rather
than a delta chain?
It's certainly not a simple chain, it's more of a set of acyclic
On Thu, 2007-12-06 at 13:04 -0500, Daniel Berlin wrote:
On 12/6/07, Linus Torvalds [EMAIL PROTECTED] wrote:
So the equivalent of git gc --aggressive - but done *properly* - is to
do (overnight) something like
git repack -a -d --depth=250 --window=250
I gave this a try
From: Jeff King [EMAIL PROTECTED]
Date: Thu, 6 Dec 2007 12:39:47 -0500
I tried the threaded repack with pack.threads = 3 on a dual-processor
machine, and got:
time git repack -a -d -f --window=250 --depth=250
real    309m59.849s
user    377m43.948s
sys     8m23.319s
On Thu, 6 Dec 2007, Jon Smirl wrote:
I have a 4.8GB git process with 4GB of physical memory. Everything
started slowing down a lot when the process got that big. Does git
really need 4.8GB to repack? I could only keep 3.4GB resident. Luckily
this happen at 95% completion. With 8GB of memory
On 12/6/07, Linus Torvalds [EMAIL PROTECTED] wrote:
On Thu, 6 Dec 2007, NightStrike wrote:
No disrespect is meant by this reply. I am just curious (and I am
probably misunderstanding something).. Why remove all of the
documentation entirely? Wouldn't it be better to just document it
On Thu, 6 Dec 2007, Jon Smirl wrote:
time git blame -C gcc/regclass.c > /dev/null
[EMAIL PROTECTED]:/video/gcc$ time git blame -C gcc/regclass.c > /dev/null
real    1m21.967s
user    1m21.329s
Well, I was also hoping for a compared to not-so-aggressive packing
number
On Thu, Dec 06, 2007 at 01:02:58PM -0500, Nicolas Pitre wrote:
What is really disappointing is that we saved
only about 20% of the time. I didn't sit around watching the stages, but
my guess is that we spent a long time in the single threaded writing
objects stage with a thrashing delta
# compile recent git master with threaded delta
cd git
echo THREADED_DELTA_SEARCH = 1 >> config.mak
make install
# get the gcc pack
mkdir gcc && cd gcc
git --bare init
git config remote.gcc.url git://git.infradead.org/gcc.git
git config remote.gcc.fetch \
    '+refs/remotes/gcc.gnu.org/*:refs/remotes/gcc.gnu.org/*'
git remote
On 12/7/07, Linus Torvalds [EMAIL PROTECTED] wrote:
On Thu, 6 Dec 2007, Jon Smirl wrote:
time git blame -C gcc/regclass.c > /dev/null
[EMAIL PROTECTED]:/video/gcc$ time git blame -C gcc/regclass.c > /dev/null
real    1m21.967s
user    1m21.329s
Well, I was also hoping
On Thu, Dec 06, 2007 at 10:35:22AM -0800, Linus Torvalds wrote:
What is really disappointing is that we saved only about 20% of the
time. I didn't sit around watching the stages, but my guess is that we
spent a long time in the single threaded writing objects stage with a
thrashing
On Fri, Dec 07, 2007 at 01:50:47AM -0500, Jeff King wrote:
Yes, but balanced by one thread running out of data way earlier than the
other, and completing the task with only one CPU. I am doing a 4-thread
test on a quad-CPU right now, and I will also try it with threads=1 and
threads=6 for
. The procedure I am using is this:
# compile recent git master with threaded delta
cd git
echo THREADED_DELTA_SEARCH = 1 >> config.mak
make install
# get the gcc pack
mkdir gcc && cd gcc
git --bare init
git config remote.gcc.url git://git.infradead.org/gcc.git
git config remote.gcc.fetch
Wednesday 05 December 2007 21:08:41, Daniel Berlin wrote:
So I tried a full history conversion using git-svn of the gcc
repository (IE every trunk revision from 1-HEAD as of yesterday)
The git-svn import was done using repacks every 1000 revisions.
After it finished, I used git-gc
On 12/5/07, Daniel Berlin [EMAIL PROTECTED] wrote:
I already have two way sync with hg.
Maybe someday when git is more usable than hg to a normal developer,
or it at least is significantly smaller than hg, i'll look at it
again.
Sorry, what is hg?
On Dec 5, 2007 11:08 AM, Daniel Berlin [EMAIL PROTECTED] wrote:
So I tried a full history conversion using git-svn of the gcc
repository (IE every trunk revision from 1-HEAD as of yesterday)
The git-svn import was done using repacks every 1000 revisions.
After it finished, I used git-gc
On 12/5/07, NightStrike [EMAIL PROTECTED] wrote:
On 12/5/07, Daniel Berlin [EMAIL PROTECTED] wrote:
I already have two way sync with hg.
Maybe someday when git is more usable than hg to a normal developer,
or it at least is significantly smaller than hg, i'll look at it
again.
Sorry,
git-gc --aggressive
Daniel> --prune. Two hours later, it finished. The final size after
Daniel> this is 1.5 gig for all of the history of gcc for just trunk.
Most of the space is probably taken by the SVN specific data. To get
an idea of how GIT would handle GCC data, you should clone the GIT
with git, it may
even be usable!
But given that git is harder to use, requires manual repacking to get
any kind of sane space usage, and is 3x bigger anyway, i don't see any
advantage to continuing to experiment with git and gcc.
I already have two way sync with hg.
Maybe someday when git is more
On 12/5/07, Ollie Wild [EMAIL PROTECTED] wrote:
On Dec 5, 2007 11:08 AM, Daniel Berlin [EMAIL PROTECTED] wrote:
So I tried a full history conversion using git-svn of the gcc
repository (IE every trunk revision from 1-HEAD as of yesterday)
The git-svn import was done using repacks every 1000
don't see any
advantage to continuing to experiment with git and gcc.
I already have two way sync with hg.
Maybe someday when git is more usable than hg to a normal developer,
or it at least is significantly smaller than hg, i'll look at it
again.
For now, it seems a net loss.
--Dan
of the pack directory.
Everyone tells me that svn specfic data is in .svn, so i am
disinclined to believe this.
Also, given that hg can store the svn data without this kind of
penalty, it's just another strike against git.
To get
an idea of how GIT would handle GCC data, you should clone the GIT
taken by the SVN specific data. To get
an idea of how GIT would handle GCC data, you should clone the GIT
directory or checkout one from infradead.org:
% git clone git://git.infradead.org/gcc.git
Actually I went through and created the basis for that repo. It
contains all branches
of the history of gcc for just trunk.
Most of the space is probably taken by the SVN specific data. To get
an idea of how GIT would handle GCC data, you should clone the GIT
directory or checkout one from infradead.org:
% git clone git://git.infradead.org/gcc.git
Actually I went
an idea of how GIT would handle GCC data, you should clone the GIT
directory or checkout one from infradead.org:
% git clone git://git.infradead.org/gcc.git
Actually I went through and created the basis for that repo. It
contains all branches and tags in the gcc svn repo and the final
pack
it smaller.
I'm sure if i spent the next few weeks fucking around with git, it may
even be usable!
But given that git is harder to use, requires manual repacking to get
any kind of sane space usage, and is 3x bigger anyway, i don't see any
advantage to continuing to experiment with git and gcc.
I
On 12/5/07, Daniel Berlin [EMAIL PROTECTED] wrote:
As I said, maybe i'll look at git in another year or so.
But i'm certainly going to ignore all the git is so great, we should
move gcc to it people until it works better, while i am much more
inclined to believe the hg is so great, we should
experience
working with the Linux kernel. From what I've heard, both do the job
reasonably well.
Thanks to git-svn, using Git to develop GCC is practical with or
without explicit support from the GCC maintainers. As I see it, the
main barrier is the inordinate amount of time it takes to bring up
usage, and is 3x bigger anyway, i don't see any
advantage to continuing to experiment with git and gcc.
I would really appreciate it if you would share experiences
like this with the GIT community, who have been now CC:'d.
That's the only way this situation is going to improve.
When you don't CC
is harder to use, requires manual repacking to get
any kind of sane space usage, and is 3x bigger anyway, i don't see any
advantage to continuing to experiment with git and gcc.
I would really appreciate it if you would share experiences
like this with the GIT community, who have been now CC:'d
From: Daniel Berlin [EMAIL PROTECTED]
Date: Wed, 5 Dec 2007 21:41:19 -0500
It is true I gave up quickly, but this is mainly because i don't like
to fight with my tools.
I am quite fine with a distributed workflow, I now use 8 or so gcc
branches in mercurial (auto synced from svn) and merge a