On Wednesday 26 November 2008, Jason Dagit wrote:
> On Tue, Nov 25, 2008 at 7:56 PM, Robin Bate Boerop
> <[EMAIL PROTECTED]> wrote:
> > Jason,
> >
> > Thanks for caring.  Here's the output of 'darcs show repo':
> >          Type: darcs
> >        Format: hashed, darcs-2
> >          Root: /home/bnnb/web/BNNB_Plone3
> >      Pristine: HashedPristine
> >         Cache: thisrepo:/home/bnnb/web/BNNB_Plone3
> >   Num Patches: 0
> >
> > Yes, that's right - this is the first 'darcs record'.  Previously, I
> > tried adding the same files to another repo; it failed, and I thought
> > there was a corruption in the repo.  So, I made a new one, and tried
> > again.  Same result.
> >
> > Something different about this repo: I've removed almost everything
> > from _darcs/prefs/boring, because I really do want everything in the
> > repo - binary files and all.  The 'darcs record' is trying to add 211
> > MB of files to the repo, spread across about 30,000 files.
>
> I bet 211MB is fine.  I think we can test that fairly easily too.  But,
> 30,000 files could be a problem.  I know that the sheer number of files

ext3 has a limit of 32768 on the number of files in a directory; it has 
the same limit on the number of hard links to a file.
As far as I know, the darcs-2 hashed format keeps all the hashed files 
in a single directory, and the same goes for patches. Maybe it hit that 
limit and the reported error is just obscure.

Try the non-hashed format, or the darcs-1 format; if the error goes 
away, this is probably the issue.
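You could also test the hypothesis directly by counting how many entries 
darcs has already written per directory. Here is an untested Haskell 
sketch; I'm going from memory on the layout, so the assumption that the 
hashed pristine lives under _darcs/pristine.hashed and the patches under 
_darcs/patches may need adjusting. Run it from the repository root:

  import Control.Monad (forM_)
  import System.Directory (doesDirectoryExist, getDirectoryContents)

  main :: IO ()
  main = forM_ ["_darcs/pristine.hashed", "_darcs/patches"] $ \dir -> do
    exists <- doesDirectoryExist dir
    if exists
      then do
        entries <- getDirectoryContents dir
        -- the listing includes "." and "..", hence the (- 2)
        putStrLn (dir ++ ": " ++ show (length entries - 2) ++ " entries")
      else putStrLn (dir ++ ": not present in this repository")

If either count is anywhere near 32768 after the failed record, that 
would be strong evidence for the per-directory limit.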

> has been a problem in the past and the problem was fixed, but it could
> have come back.  I certainly don't recall us having any stress tests of
> that magnitude.
>
> Any volunteers for this?  I'm imagining it should be pretty easy to
> write a script that just creates lots of files in an empty repository
> and then tries to record them all.
>
> Thanks,
> Jason
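Something along these lines should do as a first cut at that stress 
test. This is an untested sketch, and the darcs flag spellings are from 
memory, so double-check them; run it in a fresh, empty directory:

  import Control.Monad (forM_)
  import System.Process (system)

  main :: IO ()
  main = do
    -- initialise an empty repository (add a format flag here if you
    -- want to match the failing repo's format)
    _ <- system "darcs init"
    -- create 40000 small files, comfortably past the suspected 32768 ceiling
    forM_ [1 .. 40000 :: Int] $ \i ->
      writeFile ("file" ++ show i) (show i ++ "\n")
    -- try to record them all in a single patch
    _ <- system "darcs record -a -l -m 'lots of files' -A stress-test"
    return ()

If that fails on ext3 but works on another filesystem, we'd know where 
to look.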



-- 
Dan